Abstract
The process of learning human movement and motor control mechanisms by watching and mimicking human motions is based on visuo-motor control in three-dimensional space. However, previous studies on visuo-motor control in three-dimensional space have focused on analyzing tracking tasks along one-dimensional lines or on two-dimensional planes using single- or multi-joint movements. Therefore, in this study, we developed a new system to quantitatively evaluate visuo-motor control in three-dimensional space based on a virtual reality (VR) environment. The proposed system is designed to analyze circular tracking movements on the frontal and sagittal planes in VR space with millimeter-level accuracy. In particular, we compared circular tracking movements under monocular and binocular vision conditions. The results showed that the accuracy of circular tracking movements was approximately 4.5 times lower under monocular vision than under binocular vision on both the frontal and sagittal planes. We also found that a significant difference between the frontal and sagittal planes was observed only for the X-axis accuracy, under both monocular and binocular vision.
Introduction
Visually-guided tracking movements are important in visuo-motor control tasks such as imitation learning of tool use, dance, and sports. Studies on visually-guided tracking movements have focused on tracking tasks along one-dimensional lines or on two-dimensional planes based on multi-joint movements in three-dimensional space1,2,3,4,5,6,7,8,9,10. For example, Miall et al.1,2,3 examined visuo-motor tracking of one-dimensional sinusoidal visual targets in manual tracking tasks involving humans and monkeys using multi-joint arm movements, and found that visuo-motor control varied according to the presence of periodicity in the target trajectory. In addition, Beppu et al.4,5 analyzed motor control along ramp trajectories of patients with cerebellar ataxia and normal subjects in one degree-of-freedom elbow tracking tasks, and extracted parameters that could quantitatively evaluate disease severity. These studies showed that in tracking movement tasks, visuo-motor control and/or evaluation parameters vary according to the dimension of the trajectory (i.e. a trajectory on a one-dimensional line versus a trajectory on a two-dimensional plane). Therefore, the visuo-motor control and/or evaluation parameters in tracking tasks should be determined by the dimension of the trajectory11.
Circular tracking movements are similar to one-dimensional sinusoidal tracking movements in the sense of periodicity. However, unlike one-dimensional tracking movements, circular tracking tasks allow continuous movements with a uniform velocity on two-dimensional planes7,8,9,10,12,13,14. These studies examined two-dimensional visually-guided tracking movements by carrying out wrist and hand tracking tasks using pen tablets, two-dimensional tracers (e.g. computer mice), and two-dimensional manipulanda. In other words, they measured and analyzed three-dimensionally movable wrist and hand movements only under two-dimensional visual guidance with two-dimensional tracing devices. However, the human visuo-motor control system performs three-dimensional movements by recognizing and adapting to changes in three-dimensional environments. How to quantitatively evaluate and analyze natural human three-dimensional tracking movements is still an open question.
Recently, with the development of virtual reality (VR) technology, an increasing number of studies, on topics such as wayfinding15, proprioception16, and visuo-motor adaptation17, have been carried out in three-dimensional virtual space. In particular, Anglin et al. investigated the mechanisms of visuo-motor adaptation in head-mounted virtual reality versus conventional training17. They constructed a three-dimensional experimental environment that allowed two-dimensional circular tracking movements, and analyzed the angular error between the hand and a target that rotated along the orbit. A digitized pen with a tablet was used as the tracer to measure hand movements. However, they did not adopt or analyze circular tracking movements in a three-dimensional VR environment.
Therefore, in this study, we developed an evaluation system for three-dimensional visuo-motor control in a VR environment (see Fig. 1). In particular, we adopted a circular tracking task in three-dimensional VR space with the following requirements. (1) Movement tasks can be performed in a VR space with seamless three-dimensional stereoscopic vision, resembling seated manual movements in daily life. (2) A virtual target is displayed within arm’s reach and can be tracked by arm movements in a three-dimensional coordinate system. (3) The tracer’s movements can be precisely measured and quantitatively evaluated.
In addition, to confirm the effectiveness of the developed system, we analyzed two types of target tracking movements: circular tracking movements on the frontal plane of the body and circular tracking movements on the sagittal plane of the body. Furthermore, we compared visuo-motor control in three-dimensional circular tracking movements between monocular and binocular vision conditions.
Results
Figure 2 shows a typical example of circular tracking movement which was performed in 3D VR space. Figure 2A represents circular tracking movements on the frontal plane (see ROT(0) condition in the Procedure section of Methods), while Fig. 2B indicates those on the sagittal plane (see ROT(90) condition in the Procedure section of Methods). Furthermore, upper graphs in Fig. 2A and B show the movements performed under monocular vision condition, while lower graphs show those performed under binocular vision condition. As shown in Fig. 2, tracking accuracy with binocular vision was significantly superior to that with monocular vision on both the frontal plane and the sagittal plane.
In this study, we quantitatively evaluated visuo-motor control in circular tracking movements by analyzing distance errors between the target and the tracer in the three-dimensional VR space. Figure 3A and B represent the errors under monocular and binocular vision on the two planes. On the frontal plane (Fig. 3A), a significant difference (p = 8.7 × 10−7) was observed between the errors under monocular vision (mean error = 76.9, SD = ±33.9) and binocular vision (mean error = 16.9, SD = ±9.6).
Similarly, on the sagittal plane (Fig. 3B), a significant difference (p = 3.9 × 10−8) was observed between the errors under monocular vision (mean error = 75.0, SD = ±30.5) and binocular vision (mean error = 15.6, SD = ±10.2). The accuracy of visuo-motor control in circular tracking movements was thus approximately 4.5 times lower under monocular vision than under binocular vision on both planes.
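The distance-error analysis above can be sketched as follows. This is an illustrative reconstruction (the function and variable names are ours, not from the paper), assuming the overall error is the mean 3D Euclidean distance between matched target and tracer samples and the per-axis errors are mean absolute deviations along each axis:

```python
import numpy as np

def tracking_error(target, tracer):
    """Mean 3D Euclidean distance between matched target and tracer
    samples (both arrays of shape N x 3), plus per-axis errors taken
    as mean absolute deviations along x, y, and z."""
    target = np.asarray(target, dtype=float)
    tracer = np.asarray(tracer, dtype=float)
    diff = tracer - target
    e3d = np.linalg.norm(diff, axis=1).mean()   # overall 3D error
    ex, ey, ez = np.abs(diff).mean(axis=0)      # per-axis errors
    return e3d, ex, ey, ez
```

With sampled trajectories from one trial, `e3d` would correspond to the errors plotted in Fig. 3A,B and `ex`, `ey`, `ez` to those in Fig. 3C–H.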
We also investigated the accuracy of visuo-motor control in circular tracking movements along each axis of the three-dimensional coordinate system (Fig. 3C–H). Figure 3C–E represent the errors of each axis on the frontal plane, while Fig. 3F–H indicate those on the sagittal plane. Under binocular vision on the frontal plane (Fig. 3D), the errors along the Z-axis (eZ in Fig. 3D, mean value = 13.4) were significantly greater than the errors along the X-axis (eX in Fig. 3D, mean value = 5.2) and Y-axis (eY in Fig. 3D, mean value = 5.3) (p = 0.0003 for eX vs. eZ; p = 0.0004 for eY vs. eZ). Under monocular vision (Fig. 3C), significantly greater errors along the X-axis (eX in Fig. 3C, mean value = 18.4), Y-axis (eY in Fig. 3C, mean value = 20.7), and Z-axis (eZ in Fig. 3C, mean value = 68.8) than those under binocular vision were observed (see Fig. 3E; p = 7.5 × 10−6 for eX, p = 1.8 × 10−6 for eY, p = 1.1 × 10−6 for eZ).
On the other hand, under binocular vision on the sagittal plane (Fig. 3G), the errors along the Y-axis (eY in Fig. 3G, mean value = 5.8) were significantly greater than those along the X-axis (eX in Fig. 3G, mean value = 3.3; p = 1.0 × 10−7). The errors along the Z-axis (eZ in Fig. 3G, mean value = 12.6) were significantly greater than the errors along the X-axis and Y-axis (p = 0.0009 for eX vs. eZ; p = 0.0049 for eY vs. eZ). Under monocular vision (Fig. 3F), significantly greater errors along the X-axis (eX in Fig. 3F, mean value = 11.9), Y-axis (eY in Fig. 3F, mean value = 20.1), and Z-axis (eZ in Fig. 3F, mean value = 68.7) than those under binocular vision were observed (see Fig. 3H; p = 0.0004 for eX, p = 1.7 × 10−7 for eY, p = 6.0 × 10−8 for eZ).
Figure 4 shows the difference between the frontal plane (ROT(0)) and the sagittal plane (ROT(90)) in terms of the error along each axis of the three-dimensional coordinate system. No significant difference was observed between the frontal and sagittal planes along the Y-axis (eY in Fig. 4) or Z-axis (eZ in Fig. 4), regardless of the vision condition. In contrast, along the X-axis, significantly greater errors were observed on the frontal plane than on the sagittal plane under both monocular and binocular vision (p = 0.0049 for monocular vision; p = 0.0048 for binocular vision).
Discussion
In this study, we proposed a system that enables us to quantitatively evaluate visuo-motor control in three-dimensional space based on target tracking in a VR environment. Specifically, the proposed system enabled us to perform three-dimensional circular tracking movements in VR space with millimeter-level accuracy. We confirmed that three-dimensional visuo-motor control under monocular and binocular vision conditions could be analyzed clearly and quantitatively (see Figs 3 and 4). The results indicate that the accuracy of circular tracking movements is approximately 4.5 times lower under monocular vision than under binocular vision on both the frontal and sagittal planes. We also compared visuo-motor control between circular tracking movements on the frontal (ROT(0) condition) and sagittal (ROT(90) condition) planes (see Fig. 4). As a result, a significant difference between the frontal and sagittal planes was observed only for the X-axis accuracy, under both monocular and binocular vision.
In the following, we discuss two issues: (1) differences in three-dimensional visuo-motor control under monocular and binocular vision conditions; (2) possible applications of the proposed system.
Differences in three-dimensional visuo-motor control under monocular vision and binocular vision conditions
Studies on visuo-motor control under different vision conditions have mainly been reported for reaching and grasping arm movements18,19,20,21. A common finding is that, because of perceptual uncertainty (e.g. the precision of stimulus cues is low under monocular vision), movements take longer under monocular vision; conversely, hand movements under binocular vision are both faster and more accurate than under monocular vision. As reported by Melmoth et al.18, in a real environment, the accuracy of reaching and grasping movements under binocular vision had a 2.5- to 3-fold advantage over that under monocular vision. Our study obtained a similar result, in which the accuracy of circular tracking movements under binocular vision showed an approximately 4.5-fold advantage over that under monocular vision on both the frontal and sagittal planes in the three-dimensional VR environment (see Fig. 3A and B). In other words, our study not only confirmed that tracking accuracy in a VR environment is superior under binocular vision, but also showed that the parameter precision in the three-dimensional VR environment provided by the proposed system was similar to that in a real environment. Furthermore, our results agree with those of Anglin et al.17. However, we carried out the experiments in a three-dimensional environment directly, instead of reproducing a two-dimensional circular movement experiment in a three-dimensional environment.
In this study, a more pronounced decrement in visuo-motor control was confirmed along the Z-axis than along the X and Y axes (see Fig. 3C–H). This result agrees with the study of McKee et al.22, who found that in an isolated setting, binocular depth thresholds for objects presented in a real environment were superior to monocular thresholds by as much as a factor of 18. However, in our study, no difference in visuo-motor control along the Z-axis between the frontal and sagittal planes was observed under either monocular or binocular vision (see Fig. 4). This is presumably because the proposed system provided depth cues by presenting occlusion information of the target and the tracer23; in the real world, occlusion and object-size cues are likewise used in depth perception.
In this study, a similar increasing trend was observed in the errors along the X, Y, and Z axes between the frontal and sagittal planes under both monocular and binocular vision (see Fig. 4). This result agrees with Haggard et al.’s report that within nearby arm-reaching distance, positional information can be perceived with millimeter-level accuracy24. In this study, we could examine the differences in three-dimensional position perception and visuo-motor control between monocular and binocular vision with millimeter-level accuracy. We found that within arm-reaching distance, positional information of the target and the tracer can be perceived more accurately under binocular vision than under monocular vision (see Fig. 4). Servos et al. reported that perceived distance and perceived size under monocular vision were reduced to 86% and 81%, respectively, of those under binocular vision25. Our result confirmed the differences in perception between the two vision conditions in a three-dimensional VR environment. In addition, a significant difference was observed along the X-axis between the frontal and sagittal planes, which indicates that different control parameters and strategies were adopted in circular tracking movements on the two planes.
In previous studies on one- or two-dimensionally moving targets, the control strategies and major control parameters for tracking movements varied depending on whether the trajectory of the target was random or consistent3,9,14. It was reported that the rate of reliance on the two strategies and the main control parameters were determined by the motion range (i.e. degrees of freedom), speed, and the presence of visual information of the target1,2,3,4,5,6,7,8,11,12,14. The proposed system enables us to examine control strategies and control parameters with different variables (trajectory type, target speed, target/tracer visibility, etc.) for target tracking in three-dimensional space. In future work, more three-dimensional movements will be examined from the viewpoint of motor control based not only on spatial parameters, such as the 3D error in this study, but also on temporal parameters, such as the theta and omega errors of our previous study7.
Future application of the proposed system
In this study, we proposed a VR system that enables quantitative evaluation of visuo-motor control in three-dimensional tracking movements. Conventional studies on tracking movements were mainly carried out using linear or planar stimuli and measurement devices1,2,3,4,5,6,7,8,9,10,12,13,14. However, analysis of three-dimensional multi-joint movements based on data obtained by one- and two-dimensional devices is insufficient. Our system allows subjects to perform three degree-of-freedom visuo-motor tracking movements in an immersive three-dimensional VR environment, and various movements can be measured and analyzed with a high degree of accuracy. In future work, the proposed system can be used to evaluate disease severity and rehabilitation effectiveness for patients with a hemiplegic upper limb. It can also be applied to studies on perception in patients with spatial neglect26.
Furthermore, similar to eye-hand coordination, the proposed system constructs a VR space in which the user can explore the environment based on multisensory integration. In other words, the proposed system evaluates perception in visual space and motor space by tracking a circularly moving target with a tracer in a three-dimensional VR environment. In particular, if an object is displayed instead of the subject’s own hand in the VR environment, the subject may experience an extension of their body ownership towards the object, known as the rubber hand illusion (RHI), by performing active movements27. In our system, similar to Iriki et al.’s studies, the subject’s body image is considered to be extended to the virtual tracer28,29. It has been reported that the RHI is stronger when the rubber hand is located at the position of one’s real hand30. Furthermore, it should be noted that the efference copy31 and the sensory feedback must coincide in time to evoke the sense of agency. Therefore, our system displays a virtual tracer to indicate the hand position and movement so as to induce body ownership and the sense of agency. In this study, the virtual tracer was spatially and temporally synchronized with the movement of the subject’s hand. In future work, various perception evaluations in either realistic or manipulated conditions can be carried out using the proposed system.
The proposed system can be integrated with a haptic device with force feedback, so that not only visuo-motor control but also proprioception can be quantitatively evaluated32. Furthermore, the spatial and/or force JND of the hand-arm system can be considered as one of the perception evaluation parameters in our future experiments33,34. For example, the system can be used for quantitative evaluation in the development of rehabilitation systems for stroke patients.
As shown in Table 1, in previous studies on target tracking in one- or two-dimensional spaces, the target size was set to the same as, or two to five times greater than, the tracer size. In early studies with normal controls and patients with cerebellar ataxia or sensory ataxia, the target size was set to two to three times the tracer size to analyze the effects of target speed and visual information in tracking movements3,4,5,35,36,37. In our previous studies38,39, to ensure that patients with cerebellar ataxia could perform stable tracking movements in two-dimensional space, we set the target size to five times the tracer size. On the other hand, in studies on target tracking strategies regarding feedback and/or feedforward control in one- or two-dimensional spaces36,40,41, the target size was set to the same as the tracer size. In this study, we also aimed to quantitatively analyze target tracking strategies using the proposed system for visuo-motor control. However, our task was more difficult, as it was carried out in three-dimensional space. Thus, we set the target (1.5 cm) 1.5 times bigger than the tracer (1.0 cm). The proposed system can also be used for analyzing target tracking movements of patients with cerebellar ataxia as well as normal controls; in this case, the target should be set to a bigger size, as in our previous studies38,39. Furthermore, investigating the effect of target size in tracking movements in three-dimensional space is one of our future tasks.
Methods
Subjects
Seventeen male subjects with a mean age of 20.12 ± 0.6 (SD) participated in the experiments. All subjects had normal or corrected-to-normal vision and were right-handed. None had previously participated in similar studies. All subjects gave written informed consent prior to their participation. Informed consent was also obtained for publication of Fig. 1A. All experiments were conducted in accordance with relevant guidelines and regulations. The protocol was approved by the ethics committees of the National Institute of Technology, Gunma College.
Configuration of the proposed system
Figure 1D shows the configuration of the system used in the experiments. In order to build a system that allows quantitative evaluation of visuo-motor control in VR space, the system has to provide the following functions: (1) an immersive 3D VR environment, (2) real-time tracking and recording of arm movements, and (3) real-time rendering of a movement-synchronized tracer in the VR environment. To achieve such a system, we used Unity 3D to build the 3D VR environment and an HTC Vive for its immersive display.
The VR environment, which provides three-dimensional computer graphics (CG) and surround sound, was built with Unity. Our experiments were performed on a PC with the following specifications: Intel i7-6700 CPU, 8 GB of RAM, and an NVIDIA GeForce GTX1070 GPU. Stereoscopic 3D CG was displayed on an HTC Vive HMD (resolution: 2,160 × 1,200; field of view: 110°; refresh rate: 90 Hz). The HTC Vive comes with a hand-held controller whose spatial coordinates are tracked, recorded, and fed back in real time.
The HMD’s Lighthouse tracking system42 tracks the position of a controller with a positional precision of 2 mm. We also measured the spatial error of the system in two tests (three repetitions each): (1) moving the controller 40 cm along the x, y, and z axes guided by a ruler and comparing with the corresponding positions of the virtual tracer, which yielded position errors along the x, y, and z axes of 0.43 ± 1.55 mm, 0.60 ± 0.60 mm, and 0.97 ± 1.02 mm, respectively; and (2) constructing a cube with a side length of 40 cm in the virtual environment and measuring the position of the virtual tracer with the controller, which yielded position errors along the x, y, and z axes of 0.87 ± 0.47 mm, 0.56 ± 1.0 mm, and 0.27 ± 0.55 mm, respectively. The average frame rate of the proposed system across three trials of our experiments was 90.1 ± 0.133 FPS, matching the HMD’s 90 Hz refresh rate, which indicates that the system delay per frame is no greater than 11.1 msec (1/90 sec).
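The spatial-error measurements above reduce to computing the mean and standard deviation of signed position differences between measured tracer positions and reference positions. A minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def spatial_error_stats(measured_mm, reference_mm):
    """Mean and sample standard deviation [mm] of signed position
    errors between measured tracer positions and reference positions
    (e.g. ruler-defined points along one axis)."""
    err = np.asarray(measured_mm, float) - np.asarray(reference_mm, float)
    return err.mean(), err.std(ddof=1)
```

Applying this per axis to the repeated measurements would produce mean ± SD values of the form reported above (e.g. 0.43 ± 1.55 mm).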
In our experiments, the controller was held by subjects with their right hands. The coordinates of the tip of the controller were tracked and recorded. The sounds in the experiment were played by a 3D sound speaker (Dr. Three 3D-02, http://www.dr-three.com/products/m3d02.html).
We used the library “SteamVR Plugin for Unity v1.1.1” in the development. This library allows us to set several parameters regarding the camera, projection method, field of view, and clipping planes. In particular, for binocular disparity, we set the camera parameter to Camera(head) in the CameraRig prefab to attach the camera to the HMD. Consequently, we confirmed that the average errors of display and tracking were less than 1 mm, which ensured suitable accuracy for our experiment.
Experimental setup
The system allowed subjects to perform a visually-guided tracking task in a 3D VR environment (Fig. 1). In particular, considering the difficulty of tracking tasks in 3D space, we set the size of the target 1.5 times bigger than that of the tracer in this study. In other words, as shown in Fig. 1B, the target was a virtual red ball with a radius of 1.5 cm. Instead of their own hands or the HMD’s controller, subjects were shown a virtual stick. The tracer, a virtual yellow ball with a radius of 1 cm, was placed at the tip of the stick.
As shown in Fig. 1A, the tip of the controller was associated with the yellow virtual tracer. Subjects held the handle of the controller during the experiment. The direction of the controller was synchronized with that of the virtual stick (Fig. 1C); in other words, the position of the tracer was synchronized with the subjects’ movements. Furthermore, since the red target ball was rendered transparently, the yellow tracer ball was visible even inside the target ball. In the experiment, subjects were asked to track the target with the tracer in the 3D VR environment. The target moved at a constant speed along an invisible circular orbit with a radius of 15 cm.
The circular orbit of the target can be defined as follows:
where P(Px, Py, Pz) is the position of the target and Px, Py, Pz are its coordinates along the x-axis [m], y-axis [m], and z-axis [m]. radius is the radius of the circular movement [m]. r is the rotation angle about the y-axis \((0.0^\circ \le r < 360.0^\circ )\), and θ is the angle of the circular movement \((0.0^\circ \le \theta < 360.0^\circ )\). The axis of rotation was set to several orientations according to the requirements of the experiment.
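Since the orbit equation itself is not reproduced here, the following sketch shows one plausible form consistent with the definitions above: a circle of radius `radius` in the frontal (x-y) plane, rotated by `r` about the y-axis, so that r = 0° corresponds to the frontal-plane (ROT(0)) condition and r = 90° to the sagittal-plane (ROT(90)) condition. The exact sign conventions are our assumptions, not taken from the original equation.

```python
import math

def target_position(radius, theta_deg, r_deg):
    """Plausible target orbit: a circle of given radius [m] in the
    x-y plane (parameterized by theta), rotated by r degrees about
    the y-axis. Sign conventions are assumed, not from the paper."""
    theta = math.radians(theta_deg)
    r = math.radians(r_deg)
    px = radius * math.cos(theta) * math.cos(r)
    py = radius * math.sin(theta)
    pz = radius * math.cos(theta) * math.sin(r)
    return px, py, pz
```

For r = 0 the orbit stays in the x-y (frontal) plane; for r = 90 the x-component collapses and the orbit lies in the y-z (sagittal) plane.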
Because the height and arm length of each subject differed, the initial position of the target was calibrated before the experiments. In other words, as shown in Fig. 5A, the display position of the target was calibrated for each subject, and the eye-convergence (looking-at) position was set accordingly to the target position for each subject. Subjects were asked to sit up straight on a fixed chair and (1) hold the controller at their chin; (2) stretch their arm forward without moving their body. The initial coordinates Px, Py, and Pz of the target were calculated as follows:
where Sx, Sy, and Sz are the coordinates along the x-axis [m], y-axis [m], and z-axis [m] of the controller, relative to origin O, when held at the subject’s chin, and Tz is the z-coordinate of the controller when the subject stretched the arm forward. In particular, the constant 0.2 in Pz was set to provide a working space that ensured no collision between the target and the HMD during the experiment, and 0.15 (15 cm) in Py was set as the radius of the circular movement. The two constants were determined in a preliminary test to ensure a safe and comfortable working space for the subject. This calibration allowed the experiments to be carried out under normalized conditions. Beginning and ending sounds were played in each trial. Positions of the tracer and the target were recorded from the start until 1 second after each trial.
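Since the calibration equation is not reproduced here, the following sketch shows one plausible interpretation of the description above. The directions in which the 0.15 m and 0.2 m offsets are applied are our assumptions; the text states only the constants themselves and their purposes.

```python
def initial_target_position(sx, sy, tz, radius=0.15, clearance=0.2):
    """Hypothetical per-subject calibration: derive the target's
    initial position from the chin position (sx, sy) and the
    forward-stretched arm position tz [m]. Offset directions are
    assumptions, not taken from the original equation."""
    px = sx               # centered on the subject's midline
    py = sy + radius      # offset by the orbit radius (0.15 m)
    pz = tz - clearance   # keep 0.2 m working-space clearance
    return px, py, pz
```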
Procedure
In this study, we performed an experiment to quantitatively evaluate three-dimensional visuo-motor control using circular tracking movements on the frontal and sagittal planes of the body in the VR space (Fig. 5B and C). Subjects were seated on a chair built for the experiment and wore an HMD. Whether stereoscopic vision could be properly perceived was confirmed orally before the experiment.
For participants who could not correctly perceive the 3D objects, the interocular distance was adjusted individually; this distance can be set in the Vive HMD settings. In this study, the initial interocular distance was set to 64 mm according to the average value for Japanese males, who were our experimental participants. In other words, the 3D display quality was confirmed for every subject before each experiment.
Subjects were asked to hold the controller with their dominant hand. Calibration was then performed to locate the initial position of the target. The target moved at a speed of 0.25 Hz along the orbit after a three-second countdown with a sound effect. Subjects were asked to adjust the tracer to the position of the target during the countdown and then perform the circular tracking movement. As shown in Fig. 5B and C, the target stopped after three loops. A trial finished with a sound effect after the target had stopped for one second. Four trials were performed for the frontal plane (ROT(0) condition in Fig. 5B) and the sagittal plane (ROT(90) condition in Fig. 5C), respectively. Furthermore, circular tracking movements under binocular and monocular vision were investigated for each condition. In the monocular condition, one of the subject’s eyes was covered with an eye patch.
Therefore, 16 trials were carried out per subject. The first trial served as practice and was excluded from the analysis.
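The nominal timeline of a single trial described above (a 3 s countdown, three loops at 0.25 Hz, and recording continuing 1 s after the target stops) can be summarized as:

```python
def trial_duration_s(target_freq_hz=0.25, loops=3, countdown_s=3.0, post_s=1.0):
    """Nominal duration of one trial [s]: countdown, `loops`
    revolutions at `target_freq_hz` (one revolution takes 1/f
    seconds), and a 1 s recording tail after the target stops."""
    return countdown_s + loops / target_freq_hz + post_s
```

With the defaults this gives 3 + 12 + 1 = 16 s per trial.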
Statistical test
Group differences (significance tests) were assessed by the paired-sample Student’s t-test (ttest function in the Statistics Toolbox of Matlab Ver. 7.14.0.739 (R2012a)).
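The same paired-sample t statistic can be reproduced without Matlab; the following stdlib-only Python sketch (function names are ours) is equivalent to Matlab’s ttest on paired data or scipy.stats.ttest_rel:

```python
import math
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    """Paired-sample Student's t statistic and degrees of freedom
    (n - 1) for two conditions measured on the same subjects:
    t = mean(d) / (stdev(d) / sqrt(n)), where d are the pairwise
    differences."""
    d = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1
```

The p-value then follows from the t distribution with n − 1 degrees of freedom (e.g. via scipy.stats.t.sf).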
References
Miall, R., Weir, D. & Stein, J. Manual tracking of visual targets by trained monkeys. Behav. Brain Res. 20, 185–201 (1986).
Miall, R., Weir, D. & Stein, J. Planning of movement parameters in a visuo-motor tracking task. Behav. Brain Res. 27, 1–8 (1988).
Miall, R., Weir, D. & Stein, J. Intermittency in human manual tracking tasks. J Mot Behav 27, 53–63 (1993).
Beppu, H., Nagaoka, M. & Tanaka, R. Analysis of cerebellar motor disorders by visually-guided elbow tracking movement. Brain 107, 787–809 (1984).
Beppu, H., Nagaoka, M. & Tanaka, R. Analysis of cerebellar motor disorders by visually-guided elbow tracking movement. 2. contribution of the visual cues on slow ramp pursuit. Brain 110, 1–18 (1987).
Fine, J. M., Ward, K. L. & Amazeen, E. L. Manual coordination with intermittent targets: Velocity information for prospective control. Acta Psychol. 149, 24–31 (2014).
Kim, J., Lee, J., Kakei, S. & Kim, J. Motor control characteristics for circular tracking movements of human wrist. Adv. Robotics 31, 29–39 (2017).
Hayashi, Y. & Sawada, Y. Transition from an antiphase error-correction mode to a synchronization mode in mutual hand tracking. Phys Rev E Stat Nonlin Soft Matter Phys 88, 022704 (2013).
Roitman, A. V., Massaquoi, S. G., Takahashi, K. & Ebner, T. J. Kinematic analysis of manual tracking in monkeys: Characterization of movement intermittencies during a circular tracking task. J. Neurophysiol. 91, 901–911 (2004).
Taylor, J. A., Krakauer, J. W. & Ivry, R. B. Explicit and implicit contributions to learning in a sensorimotor adaptation task. J. Neurosci. 34, 3023–3032 (2014).
Engel, K. C. & Soechting, J. F. Manual tracking in two dimensions. J. Neurophysiol. 83, 3483–3496 (2000).
Roitman, A. V., Pasalar, S., Johnson, M. T. V. & Ebner, T. J. Position, direction of movement, and speed tuning of cerebellar purkinje cells during circular manual tracking in monkey. J Neurosci. 25, 9244–9257 (2005).
Roitman, A. V., Pasalar, S. & Ebner, T. J. Single trial coupling of purkinje cell activity to speed and error signals during circular manual tracking. Exp Brain Res. 92, 241–251 (2009).
Hayashi, Y., Tamura, Y., Sase, K., Sugawara, K. & Sawada, Y. Intermittently-visual tracking experiments reveal the roles of error-correction and predictive mechanisms in the human visual-motor control system. Trans. SICE. 46, 391–400 (2010).
Daga, F. B. et al. Wayfinding and glaucoma: A virtual reality experiment. Investig. Ophthalmol. Vis. Sci. 58, 3343 (2017).
Cho, S. et al. Development of virtual reality proprioceptive rehabilitation system for stroke patients. Comput. Methods Programs Biomed. 113, 258–265 (2014).
Anglin, J. M., Sugiyama, T. & Liew, S. L. Visuomotor adaptation in head-mounted virtual reality versus conventional training. Sci. Reports 45469 (2017).
Melmoth, D. R. & Grant, S. Advantages of binocular vision for the control of reaching and grasping. Exp. Brain Res. 171, 371–388 (2006).
Loftus, A., Servos, P., Goodale, M. A., Mendarozqueta, N. & Mon-Williams, M. When two eyes are better than one in prehension: monocular viewing and end-point variance. Exp. Brain Res. 158, 317–327 (2004).
Melmoth, D. R., Finlay, A. L., Morgan, M. J. & Grant, S. Grasping deficits and adaptations in adults with stereo vision losses. Investig. Ophthalmol. Vis. Sci. 50, 3711–3720 (2009).
Servos, P. Distance estimation in the visual and visuomotor systems. Investig. Ophthalmol. Vis. Sci. 130, 35–47 (2000).
McKee, S. P. & Taylor, D. G. The precision of binocular and monocular depth judgments in natural settings. J. Vis. 10, 5 (2010).
Gillam, B. & Borsting, E. The role of monocular regions in stereoscopic displays. Percept. 17, 603–608 (1988).
Haggard, P., Newman, C., Blundell, J. & Andrew, H. The perceived position of the hand in space. Percept Psychophys 62, 363–377 (2000).
Servos, P. The role of binocular vision in prehension: a kinematic analysis. Vis. Res. 32, 1513–1521 (1992).
Gross, D. P. et al. Development of a computer-based clinical decision support tool for selecting appropriate rehabilitation interventions for injured workers. J. Occup. Rehabil. 23, 597–609 (2013).
Ma, K. & Hommel, B. Body-ownership for actively operated non-corporeal objects. Conscious. Cogn. 36, 75–86 (2015).
Maravita, A. & Iriki, A. Tools for the body (schema). Trends Cogn. Sci. 8, 79–86 (2004).
Iriki, A., Tanaka, M., Obayashi, S. & Iwamura, Y. Self-images in the video monitor coded by monkey intraparietal neurons. Neurosci. Res. 40, 163–173 (2001).
Lloyd, D. M. Spatial limits on referred touch to an alien limb may reflect boundaries of visuo-tactile peripersonal space surrounding the hand. Brain Cogn. 64, 104–109 (2007).
Holst, E. V. & Mittelstaedt, H. The reafference principle. Naturwissenschaften 37, 464–467 (1950).
Choi, W., Li, L., Satoh, S. & Hachimura, K. Multisensory integration in the virtual hand illusion with active movement. BioMed Res. Int. 2016, 1–9 (2016).
Mastmeyer, A., Hecht, T., Fortmeier, D. & Handels, H. Ray-casting based evaluation framework for haptic force feedback during percutaneous transhepatic catheter drainage punctures. Int. J. Comput. Assist. Radiol. Surg. 9, 421–431 (2014).
Mastmeyer, A., Fortmeier, D. & Handels, H. Evaluation of direct haptic 4D volume rendering of partially segmented data for liver puncture simulation. Sci. Rep. 7, 1–15 (2017).
Nagaoka, M. & Tanaka, R. Contribution of kinesthesia on human visuomotor elbow tracking movements. Neurosci. Lett. 26, 245–249 (1981).
Foulkes, A. J. McC. & Miall, R. C. Adaptation to visual feedback delays in a human manual tracking task. Exp. Brain Res. 131, 101–110 (2000).
Reed, D. W., Liu, X. & Miall, R. C. On-line feedback control of human visually guided slow ramp tracking: effects of spatial separation of visual cues. Neurosci. Lett. 338, 209–212 (2003).
Lee, J., Kagamihara, Y., Tomatsu, S. & Kakei, S. The functional role of the cerebellum in visually guided tracking movement. Cerebellum 11, 426–433 (2012).
Lee, J., Kagamihara, Y. & Kakei, S. A new method for functional evaluation of motor commands in patients with cerebellar ataxia. PLoS One 10, e0132983 (2015).
Hayashi, Y., Tamura, Y., Sugawara, K. & Sawada, Y. Role of the rhythmic component in the proactive control of a human hand. Artif. Life Robotics 14, 164–167 (2015).
Ao, D., Song, R. & Tong, K. Y. Sensorimotor control of tracking movements at various speeds for stroke patients as well as age-matched and young healthy subjects. PLoS One 10, e0128328 (2015).
Kreylos, O. Lighthouse tracking examined. http://doc-ok.org/?p=1478 (2016).
Author information
Contributions
W.C. conceived and designed the experiments, performed the experiments, analyzed the data, and wrote the paper. J.L. conceived and designed the experiments, performed the experiments, analyzed the data, and wrote the paper. N.Y. conceived and designed the experiments. L.L. wrote the paper. J.K. conceived and designed the experiments.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Choi, W., Lee, J., Yanagihara, N. et al. Development of a quantitative evaluation system for visuo-motor control in three-dimensional virtual reality space. Sci Rep 8, 13439 (2018). https://doi.org/10.1038/s41598-018-31758-y