
CN110786825A - Spatial perception detuning training system based on virtual reality visual and auditory pathway - Google Patents

Spatial perception detuning training system based on virtual reality visual and auditory pathway

Info

Publication number
CN110786825A
CN110786825A
Authority
CN
China
Prior art keywords
module
spatial
visual
auditory
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910944581.7A
Other languages
Chinese (zh)
Other versions
CN110786825B (en)
Inventor
Qin Lu (秦璐)
Wang Suogang (王索刚)
Li Weikuan (李伟宽)
Liu Luoxi (刘洛希)
Zhang Zhongyang (张重阳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Fanju Science & Technology Co ltd
Original Assignee
Zhejiang Fanju Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Fanju Science & Technology Co ltd
Priority to CN201910944581.7A
Publication of CN110786825A
Application granted
Publication of CN110786825B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/40: Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4005: Detecting, measuring or recording for evaluating the nervous system for evaluating the sensory system

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Neurosurgery (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Neurology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physiology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention provides a spatial perception disorder training system based on virtual reality visual and auditory pathways. The system comprises a power supply module, a main control module, a spatial motion data processing module, a spatial perception data analysis module, a visual and auditory task presentation module, a reference database module and a report generation module, with the spatial motion data processing module connected to a hand spatial position acquisition module and a foot spatial position acquisition module. Through visual and auditory instructions issued by the system, the hand/foot spatial position acquisition modules measure and collect the motion parameters of the hands and feet as they respond to spatial perception tasks, thereby testing the accuracy and precision of the user's hand and foot movements.

Description

Spatial perception detuning training system based on virtual reality visual and auditory pathway
Technical Field
The invention belongs to the technical field of attention training, and particularly relates to a spatial perception disorder training system based on a virtual reality visual-auditory pathway.
Background
Spatial perception includes distance perception, orientation perception and other aspects; it is a manifestation of the brain's perceptual function, and spatial perception disorder is one form of brain dysfunction. Adolescents form shape perception through eye movement, and form perception of size and distance through linear perspective, aerial perspective, light and shade, motion parallax and the like. Without spatial perception, the brain cannot respond sensitively to spatial information. The disorder has innate factors, but is mainly caused by environmental and human factors. With the development of mobile networks and mobile phone technology in China, many children now spend long periods on mobile phones, tablets and television. The electronic screen is colorful and attractive, and a child may sit motionless in front of it for an hour or two; over time this hampers the development of the child's visual spatial perception and leaves it weak. In addition, China is developing rapidly and competition for talent is fierce; many parents require even very young children to learn competitive knowledge such as Tang poetry, painting and mathematics, so the children's bodies remain largely inactive. As a result, a considerable proportion of children aged six to twelve exhibit problems such as getting lost, frequently miswriting character radicals when writing and reading, reversing digits, or skipping characters. Such problems easily lead to poor learning performance, lack of concentration on learning tasks, low efficiency in listening to lectures, low grades, carelessness and procrastination on homework; over time such children increasingly lack confidence and tend to depend on others. Therefore, testing for spatial perception disorder helps parents and teachers understand a child's spatial perception level, so that intervention and training can be provided for children with the disorder, or a suitable education and teaching method adopted; this helps adults better care for children and their development.
Existing techniques addressing spatial perception disorder are not uncommon, and mainly involve training and testing for sensory integration disorder. One example is a sensory integration training room for children with sensory integration disorder (utility model patent, application No. CN202324705U), comprising a series of physical training devices for sensory integration training. However, it requires a large training venue, the equipment must be maintained regularly, and a trainer must supervise in real time to ensure the children's safety. Another example is a baton dedicated to children with sensory integration disorder (utility model patent, application No. CN203480724U), used to train the large and small muscle groups, the sense of balance and vestibular sensation, and the command responses of such children. This can unify the sense of movement and the movement itself to some extent, but the training effect cannot be objectively and quantitatively evaluated. Invention patents CN1506128A and CN1506129A mainly describe children's sensory integration training devices designed for preventing and treating sensory integration disorder. These can enhance the perception of a child's sensory channels to some extent, but the training effect still cannot be fully objectively quantified; moreover, the equipment is mechanical, with elevated rotating parts, and without careful maintenance or the guidance and supervision of a trainer it poses certain safety risks to children.
Disclosure of Invention
The invention aims to solve the above problems by providing a spatial perception disorder training system based on a virtual reality visual-auditory pathway.
in order to achieve the purpose, the invention adopts the following technical scheme:
a virtual reality visual and auditory pathway-based spatial perception disorder training system comprises a power supply module, a main control module, and a spatial motion data processing module, a spatial perception data analysis module, a visual and auditory task presentation module, a reference database module and a report generation module which are connected with the main control module, wherein the spatial motion data processing module is connected with a hand spatial position acquisition module and a foot spatial position acquisition module,
the hand space position acquisition module is used for acquiring motion parameters of the hand;
the foot space position acquisition module is used for acquiring the motion parameters of the foot;
the visual and auditory task presentation module is used for presenting immersive visual information and/or auditory information according to commands from the main control module;
the reference database module is used for storing reference data;
the spatial perception data analysis module is used for analyzing the test result according to the corresponding reference data in the reference database module;
and the report generating module is used for generating a corresponding test report according to the analysis result of the spatial perception data analysis module.
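The module topology just listed can be summarized in code. Below is a minimal Python sketch; all class and attribute names are illustrative inventions for this sketch, not identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PositionAcquisitionModule:
    """Worn on a hand or foot; collects raw motion parameters."""
    site: str                                   # "hand" or "foot"
    samples: List[Tuple[float, ...]] = field(default_factory=list)

@dataclass
class SpatialMotionDataProcessor:
    """Aggregates and filters data from both acquisition modules."""
    hand: PositionAcquisitionModule
    foot: PositionAcquisitionModule

@dataclass
class MainControlModule:
    """System core: drives task presentation, analysis and reporting."""
    motion: SpatialMotionDataProcessor
    reference_db: dict                          # reference data store
    # task presentation, analysis and report generation would be
    # further attributes wired to this object in the same way

system = MainControlModule(
    motion=SpatialMotionDataProcessor(
        hand=PositionAcquisitionModule("hand"),
        foot=PositionAcquisitionModule("foot"),
    ),
    reference_db={},
)
```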
The above virtual reality visual-auditory pathway-based spatial perception disorder training system includes the following test steps:
S1, the visual and auditory task presentation module is worn by the user, and the hand spatial position acquisition module and the foot spatial position acquisition module are worn on the user's hand and foot respectively;
S2, a virtual reality scene is provided, and at least one visual hand/foot spatial motion test and one auditory hand/foot spatial motion test are performed on the user in the virtual reality scene;
and S3, the spatial perception data analysis module analyzes the user's test results.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S2, the visual hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
the user is prompted in text form to control a target object with the corresponding hand/foot to complete a spatial movement along a designated path, and the completed spatial movement path length and the spatial motion control completion time are recorded.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S2, the auditory hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
the user is prompted by voice to control a target object with the corresponding hand/foot to complete the spatial movement specified by the voice instruction, and the actual spatial movement path length and the instructed path length are recorded.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S3, the spatial perception data analysis module analyzes the test results as follows:
S31, obtaining the visual spatial motion operation approximation through a visual spatial motion operation approximation calculation method, and obtaining the auditory spatial motion operation approximation through an auditory spatial motion operation approximation calculation method;
S32, obtaining a spatial perception comprehensive quotient from the visual and auditory spatial motion operation approximations through a spatial perception comprehensive quotient calculation method;
S33, obtaining a normalized quotient from the spatial perception comprehensive quotient and the reference data;
and S34, judging the user's spatial perception disorder test result according to the normalized quotient.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S31, the visual spatial motion operation approximation calculation method uses the following formula:
A = (S1/T1 + S2/T2 + … + Sn/Tn)/n, where
A represents the visual spatial motion operation approximation;
S1, S2, …, Sn represent the completed spatial movement path length of each test;
T1, T2, …, Tn represent the spatial motion control completion time of each test;
n represents the number of tests.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S31, the auditory spatial motion operation approximation calculation method uses the following formula:
B = (C1/D1 + C2/D2 + … + Cn/Dn)/n, where
B represents the auditory spatial motion operation approximation;
C1, C2, …, Cn represent the actual spatial movement path length of each test;
D1, D2, …, Dn represent the instructed path length of each test;
n represents the number of tests.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S32, the spatial perception comprehensive quotient calculation method uses the following formula:
E = Ax + By, where
E represents the user's spatial perception comprehensive quotient;
x represents the weight of the visual spatial motion operation approximation;
y represents the weight of the auditory spatial motion operation approximation;
and in step S33, the normalized quotient is obtained through the following formula:
F = 100 + (E - G), where
F represents the user's normalized quotient;
and G represents the reference spatial perception comprehensive quotient in the reference data.
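Taken together, the four formulas above form one scoring pipeline. The following Python sketch implements them as written; the function names and the sample trial data are illustrative, and the default weights x = y = 0.5 anticipate the values used in the embodiment below.

```python
def visual_approximation(path_lengths, completion_times):
    """A = (S1/T1 + ... + Sn/Tn) / n over n visual test trials."""
    n = len(path_lengths)
    return sum(s / t for s, t in zip(path_lengths, completion_times)) / n

def auditory_approximation(actual_paths, instructed_paths):
    """B = (C1/D1 + ... + Cn/Dn) / n over n auditory test trials."""
    n = len(actual_paths)
    return sum(c / d for c, d in zip(actual_paths, instructed_paths)) / n

def composite_quotient(a, b, x=0.5, y=0.5):
    """E = Ax + By; the two weights sum to 1."""
    return a * x + b * y

def normalized_quotient(e, g):
    """F = 100 + (E - G), G being the reference comprehensive quotient."""
    return 100 + (e - g)

# Illustrative data: three visual trials (path in m, time in s)
# and three auditory trials (actual vs instructed path in m).
A = visual_approximation([5.0, 4.8, 5.2], [6.1, 5.9, 6.4])
B = auditory_approximation([4.7, 5.1, 2.9], [5.0, 5.0, 3.0])
E = composite_quotient(A, B)
F = normalized_quotient(E, g=1.1)  # reference value g is illustrative
print(A, B, E, F)
```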
The above virtual reality visual-auditory pathway-based spatial perception disorder training system further comprises a spatial perception training scheme generation module and a spatial perception training process control module connected to the main control module, wherein
the spatial perception training scheme generation module is used for providing the user with training tasks of the corresponding grade according to the analysis results of the spatial perception data analysis module;
and the spatial perception training process control module is used for storing the user's training scheme, recording the progress of the scheme, and recording the historical performance of completed training.
The above virtual reality visual-auditory pathway-based spatial perception disorder training system further includes the following training steps:
A. providing the user with training tasks of the corresponding grade according to the analysis results of the spatial perception data analysis module;
B. performing step S2 in training mode under the corresponding training task.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, the foot spatial position acquisition module and the hand spatial position acquisition module each comprise a wearable ring and a mounting box fixed on the ring; the mounting box contains a six-axis/nine-axis sensor, which is connected to the main control module in a wired and/or wireless manner.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, the visual and auditory task presentation module comprises a virtual reality head-mounted module and a high-fidelity headset, each connected to the main control module, with the high-fidelity headset integrated on the virtual reality head-mounted module.
Compared with the prior art, the invention has the following advantages: through visual and auditory instructions issued by the system, the hand/foot spatial position acquisition modules measure and collect the motion parameters of the hands and feet as they respond to spatial perception tasks, thereby testing the accuracy and precision of the user's hand and foot movements; after the test, the system automatically computes the various parameters related to spatial perception and compares them with reference data for the same sex and age group, thereby comprehensively testing the user's spatial perception level; and it generates a personalized spatial perception training scheme according to the test results, automatically and intelligently guiding the user through training to effectively enhance spatial perception ability.
Drawings
Fig. 1 is a schematic structural diagram of a virtual reality visual-auditory pathway-based spatial perception disorder training system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a visual-auditory task presentation module according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a hand/foot spatial position acquisition module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of visual hand spatial movement testing/training according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of visual foot spatial movement testing/training according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of auditory hand spatial movement testing/training according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of auditory foot spatial movement testing/training according to an embodiment of the present invention;
FIG. 8 is a flowchart of a visual-auditory spatial perception test provided by an embodiment of the present invention;
fig. 9 is a flowchart of the audiovisual spatial perception training provided by the embodiment of the invention.
Reference numerals: a power supply module 1; a main control module 2; a hand spatial position acquisition module 3; a foot space position acquisition module 4; a spatial motion data processing module 5; a visual-auditory task presentation module 6; a virtual reality head-mounted module 61; a high fidelity headset 62; a reference database module 7; a report generation module 8; a spatial perception data analysis module 9; a spatial perception training scheme generating module 11; the spatial perception training process control module 12.
Detailed Description
In terms of information-processing mechanisms, humans perceive the world mainly through the visual, auditory, tactile, olfactory and other pathways, with the visual-auditory pathways receiving and perceiving approximately 94% of information. Vision and hearing are therefore the main human information-processing channels. Brain science research holds that visual and auditory functions are not independent: in healthy people they are interrelated. In form, information processing divides into visual single-channel processing, auditory single-channel processing, and mixed visual-auditory dual-channel processing, and is mainly embodied in these three visual-auditory channel forms. Spatial perception disorder manifests chiefly as abnormal visual and auditory information processing, so the tests in this scheme start from these three sensory pathways.
Second, regarding the operation control mechanism, the accuracy of limb movement in space reflects the level of spatial perception ability, and measuring, comparing and analyzing movement against the preset path of a spatial motion task can reveal spatial perception problems. This scheme uses fine hand and foot operation control to express the spatial perception level objectively through quantitative parameters.
The following are preferred embodiments of the present invention and are further described with reference to the accompanying drawings, but the present invention is not limited to these embodiments.
As shown in fig. 1, this embodiment provides a spatial perception disorder training system based on virtual reality visual-auditory pathways, which includes a power supply module 1, a main control module 2, and, connected to the main control module 2, a spatial motion data processing module 5, a spatial perception data analysis module 9, a visual-auditory task presentation module 6, a reference database module 7 and a report generation module 8; the spatial motion data processing module 5 is connected to a hand spatial position acquisition module 3 and a foot spatial position acquisition module 4, wherein:
The power supply module 1 is mainly used for supplying power to each module, providing a 3.3-5 V direct current supply; power can be supplied through a USB port of the main control module or by an external direct current power supply;
The main control module 2 is the core of the whole system; it can be a desktop computer host, a notebook computer or the like, and mainly performs operations such as visual and auditory task flow control, visual and auditory task presentation control, access control of the reference database module 7, control of the spatial perception data analysis module 9, and control of the report generation module.
The hand space position acquisition module 3 is used for acquiring motion parameters of the hand;
the foot space position acquisition module 4 is used for acquiring the motion parameters of the foot;
the visual and auditory task presentation module 6 is used for presenting immersive visual information and/or auditory information according to the command of the main control module 2;
The reference database module 7 is used for storing reference data, including reference visual/auditory hand spatial motion operation approximations, reference visual/auditory foot spatial motion operation approximations, reference spatial perception comprehensive quotients, quotient standard deviations and the like, compiled from statistics on normal populations. Each sex and age group has its own reference data: for example, each year of age from 6 to 18 forms one statistical segment; every two years from 19 to 24 forms one segment; every five years from 25 to 50 forms one segment; ages 51 to 60 form one segment; and ages 61 and above form one segment.
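As an illustration, the age-band segmentation above amounts to a lookup from a user's age to the statistical segment whose reference data applies. A minimal Python sketch follows, with band boundaries taken from the text; the handling of boundary ages such as 50 is an assumption, since the text does not pin it down, and all names are illustrative.

```python
def age_segment(age: int):
    """Return the (lo, hi) age band for reference-data lookup;
    hi is None for the open-ended 61+ band."""
    if age < 6:
        raise ValueError("no reference data below age 6")
    if 6 <= age <= 18:                      # one segment per year
        return (age, age)
    if 19 <= age <= 24:                     # two-year segments
        lo = 19 + ((age - 19) // 2) * 2
        return (lo, lo + 1)
    if 25 <= age <= 50:                     # five-year segments
        lo = 25 + ((age - 25) // 5) * 5
        return (lo, min(lo + 4, 50))
    if 51 <= age <= 60:                     # one segment
        return (51, 60)
    return (61, None)                       # 61 and above

# Reference data would then be keyed by sex plus band, e.g.:
# reference_db[("female", age_segment(9))] -> reference quotients
```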
The spatial perception data analysis module 9 is used for analyzing the test result according to the corresponding reference data in the reference database module 7;
The spatial motion data processing module 5 is configured to apply Kalman filtering to the data acquired by the hand spatial position acquisition module 3 and the foot spatial position acquisition module 4, removing interference and noise introduced during acquisition. It may be implemented, for example, on an Arduino based on the Mega2560, Mini or Nano architecture, or on an STM32 single-chip microcontroller; the processed data is sent to the main control module 2 via a USB data cable, a wireless Bluetooth protocol or the like.
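For illustration, a scalar Kalman filter of the kind this module could apply to one accelerometer axis is sketched below in Python; the process and measurement noise variances q and r are illustrative tuning values, not parameters from the patent.

```python
def kalman_1d(measurements, q=1e-3, r=1e-2):
    """Scalar Kalman filter: constant-state model with process noise q
    and measurement noise r; returns the smoothed sequence."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    smoothed = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update estimate with the measurement
        p = (1 - k) * p           # update covariance
        smoothed.append(x)
    return smoothed

# e.g. smoothing a noisy accelerometer-x trace before it is sent
# on to the main control module:
ax_filtered = kalman_1d([0.02, 0.05, -0.01, 0.03, 0.04])
```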
The report generation module 8 is used for presenting the corresponding test report, according to the analysis results of the spatial perception data analysis module 9, as charts and text arranged in a defined graphic-and-text layout, in Word or PDF document form.
Further, the embodiment further includes a spatial perception training scheme generating module 11 and a spatial perception training process control module 12, which are connected to the main control module 2, wherein,
the spatial perception training scheme generating module 11 is used for providing training tasks of corresponding grades for a user according to the analysis result of the spatial perception data analyzing module 9;
and the spatial perception training process control module 12 is used for storing the user's training scheme, recording the progress of the scheme, and recording and querying the historical performance of completed training.
Further, as shown in fig. 2, the visual and auditory task presentation module 6 includes a virtual reality head-mounted module 61 and a high-fidelity headset 62, each connected to the main control module 2, with the high-fidelity headset 62 integrated on the virtual reality head-mounted module 61; the input of the high-fidelity headset 62 is connected to the main control module 2. The virtual reality head-mounted module 61 is a virtual reality device worn on the head, such as a desktop-grade HTC Vive series or Oculus series headset, or a mobile-grade headset such as a Pico device, and presents immersive visual information according to commands from the main control module 2; the high-fidelity headset 62 presents immersive auditory information according to commands from the main control module 2.
Specifically, as shown in fig. 3, the foot spatial position acquisition module 4 and the hand spatial position acquisition module 3 each include a wearable ring and a mounting box fixed on it; the mounting box may be a light plastic round or square box and contains a six-axis/nine-axis sensor connected to the main control module 2 in a wired and/or wireless manner. The modules can be strapped to the user's hands or feet. The core component is the six-axis/nine-axis sensor; this embodiment uses a six-axis sensor comprising a three-axis accelerometer and a three-axis gyroscope, used to record the spatial motion parameters of the corresponding hand or foot. The mounting box may carry a data cable interface linking the internal six-axis sensor to the main control module 2; alternatively, a wireless module in the mounting box handles the information transfer between the six-axis sensor and the main control module 2 using Bluetooth, RFID, Wi-Fi or the like, the specific transfer details not being repeated here.
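As a sketch of the wired (USB data cable) variant on the host side, assume, hypothetically, that the Arduino/STM32 firmware streams one comma-separated line of the six sensor values per sample; the port name, baud rate and line format are all assumptions, and the pyserial package provides the serial port access.

```python
import serial  # pyserial

def read_imu_samples(port="/dev/ttyUSB0", baud=115200, n=100):
    """Read n six-axis samples (ax, ay, az, gx, gy, gz),
    one CSV line per sample (assumed firmware format)."""
    samples = []
    with serial.Serial(port, baud, timeout=1.0) as ser:
        while len(samples) < n:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            fields = line.split(",")
            if len(fields) == 6:
                samples.append(tuple(float(v) for v in fields))
    return samples
```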
Specifically, when the system is put into use, the testing method comprises the following steps:
S1, the visual and auditory task presentation module 6 is worn by the user, and the hand spatial position acquisition module 3 and the foot spatial position acquisition module 4 are worn on the user's hand and foot respectively;
S2, a virtual reality scene is provided, and at least one visual hand/foot spatial motion test and one auditory hand/foot spatial motion test are performed on the user in the virtual reality scene;
and S3, the spatial perception data analysis module 9 analyzes the user's test results.
Specifically, in step S2, the visual hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
the user is prompted in text form to control a target object with the corresponding hand/foot to complete a spatial movement along a designated path, and the completed spatial movement path length and the spatial motion control completion time are recorded.
Likewise, in step S2, the auditory hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
the user is prompted by voice to control a target object with the corresponding hand/foot to complete the spatial movement specified by the voice instruction, and the actual spatial movement path length and the instructed path length are recorded.
Further, in step S3, the spatial perception data analysis module 9 analyzes the test results as follows:
S31, obtaining the visual spatial motion operation approximation through a visual spatial motion operation approximation calculation method, and obtaining the auditory spatial motion operation approximation through an auditory spatial motion operation approximation calculation method;
S32, obtaining a spatial perception comprehensive quotient from the visual and auditory spatial motion operation approximations through a spatial perception comprehensive quotient calculation method;
S33, obtaining a normalized quotient from the spatial perception comprehensive quotient and the reference data;
and S34, judging the user's spatial perception disorder test result according to the normalized quotient.
Specifically, in step S31, the visual spatial motion operation approximation calculation method uses the following formula:
A = (S1/T1 + S2/T2 + … + Sn/Tn)/n, where
A represents the visual spatial motion operation approximation;
S1, S2, …, Sn represent the completed spatial movement path length of each test;
T1, T2, …, Tn represent the spatial motion control completion time of each test;
n represents the number of tests.
Likewise, in step S31, the auditory spatial motion operation approximation calculation method uses the following formula:
B = (C1/D1 + C2/D2 + … + Cn/Dn)/n, where
B represents the auditory spatial motion operation approximation;
C1, C2, …, Cn represent the actual spatial movement path length of each test;
D1, D2, …, Dn represent the instructed path length of each test;
n represents the number of tests.
Further, in step S32, the spatial perception comprehensive quotient calculation method uses the following formula:
E = Ax + By, where
E represents the user's spatial perception comprehensive quotient;
x represents the weight of the visual spatial motion operation approximation;
y represents the weight of the auditory spatial motion operation approximation;
and in step S33, the normalized quotient is obtained through the following formula:
F = 100 + (E - G), where
F represents the user's normalized quotient;
and G represents the reference spatial perception comprehensive quotient in the reference data.
Here G is the reference spatial perception comprehensive quotient of the population in the same age group as the user. The weights are determined case by case, with the two weights summing to 1; in this embodiment x = 0.5 and y = 0.5.
In step S34, if the user's normalized quotient is 80-89 points, the user's score is below the group average and is marked as poor; 90-109 points means the score is close to the average and is marked as normal; 110-119 points means the score is above the average and is marked as good; 120-129 points means the score is well above the average and is marked as excellent; and 130 points and above is marked as superior.
Further, in step S33, the normalized quotient can alternatively be obtained as: normalized quotient = 100 + 15 × (user score - reference mean)/standard deviation. The same bands apply: 80-89 points means the score is below the group average, marked as poor; 90-109 points is close to the average, marked as normal; 110-119 points is above the average, marked as good; 120-129 points is well above the average, marked as excellent; and 130 points and above is marked as superior.
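Both normalization variants and the grading bands reduce to a few lines of code. In the sketch below the band edges follow the text; the label for scores below 80 is an assumption (the text defines no band there), and the function names are illustrative.

```python
def normalized_quotient_std(user_score, ref_mean, ref_std):
    """Alternative normalization: 100 + 15 * (score - mean) / std."""
    return 100 + 15 * (user_score - ref_mean) / ref_std

def grade(f):
    """Map a normalized quotient to the band labels used in the text."""
    if f < 80:
        return "ungraded"      # assumption: the text defines no band here
    if f <= 89:
        return "poor"          # below the group average
    if f <= 109:
        return "normal"        # close to the average
    if f <= 119:
        return "good"          # above the average
    if f <= 129:
        return "excellent"     # well above the average
    return "superior"          # 130 and above

# e.g. a score one standard deviation above the reference mean:
print(grade(normalized_quotient_std(1.25, ref_mean=1.10, ref_std=0.15)))
```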
Further, the system of the present embodiment further includes the following training steps:
A. providing training tasks of corresponding grades for a user according to the analysis result of the spatial perception data analysis module 9;
B. performing step S2 in training mode under the corresponding training task, i.e.
S2, a virtual reality scene is provided, and according to the training task at least one session of visual hand/foot spatial motion training and auditory hand/foot spatial motion training is performed on the user in the virtual reality scene.
Of course, step S1 is also included in the training process, and is not repeated here since step S1 has already been performed in the testing process.
In addition, the training process may also include a training mode performing step S3, that is,
And S3, the spatial perception data analysis module 9 analyzes the user's training results. Like the analysis of test results, the analysis of training results can give a judgment of 'poor', 'normal', 'good', 'excellent' or 'superior' for each training result. Alternatively, this evaluation step can be replaced by a training-effect measure, i.e. the degree of progress of each training result relative to the test result, or the user's spatial perception comprehensive quotient can be given directly for the user to judge.
The following detailed examples of the test procedures for each site:
1) As shown in fig. 4, visual hand spatial movement testing/training:
A hand spatial position acquisition module 3 is strapped to each of the user's hands.
The virtual reality head-mounted module 61 creates an open visual environment with a simple abstract ground and sky, with the user standing. The text 'right hand test/training' appears in the user's field of view and then disappears. A three-dimensional ball appears directly in front of the user at a perceived distance of 5-10 meters, moves at constant speed in front of the user while drawing a motion trajectory, and the trajectory remains displayed directly in front of the user. A small ball then appears at the starting point of the trajectory. The user's task is to point at the small ball with the right hand, which wears the hand spatial position acquisition module 3; the pointing appears in the virtual reality environment as a beam of blue light. Once the blue light points at the small ball, the test/training begins: the user controls arm movement so that the blue-light-controlled ball moves along the trajectory as accurately as possible. When the ball reaches the end of the trajectory and is held there for at least 100 milliseconds, the next trial begins. If the user does not bring the ball to the end point within 10 seconds, or produces no ball movement within 3 seconds, the virtual reality environment reports the test/training as invalid and the next trial begins. Right hand testing/training is followed by left hand testing/training: the text 'left hand test/training' appears in the user's field of view and then disappears, and the procedure is repeated with the left hand wearing the hand spatial position acquisition module 3, under the same 100-millisecond hold at the trajectory end and the same 10-second and 3-second invalidation rules. The motion trajectory differs from ball to ball.
2) As shown in fig. 5, visual foot spatial movement testing/training:
the user ties one foot spatial position acquisition module 4 to each foot.
The virtual reality head-mounted module 61 creates an open visual environment with a simple abstract ground and sky, with the user standing. Balls of two selectable colors, e.g. a 'red ball' or a 'black ball', appear in sequence on the ground in the user's field of view, only one at a time and in random order. The ball moves at constant speed over the ground in front of the user while drawing a motion trajectory, which remains displayed on the ground; a small ball then appears at the starting point of the trajectory. In the virtual reality environment the user's right foot is represented as a red square and the left foot as a black square, each the size of the ball's circumscribed square. When a 'red ball' appears, the user steps with the right foot to drag the ball along the trajectory; when a 'black ball' appears, the user steps with the left foot. When the ball reaches the end of the trajectory and is held there for at least 500 milliseconds, the next trial begins. If the user does not bring the ball to the end point within 10 seconds, or produces no ball movement within 3 seconds, the virtual reality environment reports the test/training as invalid and the next trial begins.
3) As shown in fig. 6, auditory hand space movement testing/training:
The user straps a hand spatial position acquisition module 3 to each arm.
The virtual reality head-mounted module 61 creates an open visual environment with a simple abstract ground and sky, with the user standing. A scale appears in the field of view to give the user a sense of segment length. The text 'right hand test/training' appears in the user's field of view and then disappears. A small ball appears and a voice instruction is played; once the blue light pointed by the right hand reaches the small ball, the test/training begins, and the user controls arm movement so that the blue-light-controlled ball moves according to the voice instruction as accurately as possible. After the controlled movement over the required distance, the ball is stopped and held for at least 100 milliseconds, and the next trial begins. If the user does not bring the ball to the end point within 10 seconds, or produces no ball movement within 3 seconds, the virtual reality environment reports the test/training as invalid and the next trial begins. Right hand testing/training is followed by left hand testing/training, which proceeds in the same way after the text 'left hand test/training' appears and disappears. The motion trajectory differs from ball to ball.
4) As shown in fig. 7, auditory foot spatial movement testing/training:
the user ties one foot spatial position acquisition module 4 to each foot.
The virtual reality head-mounted module 61 creates an open visual environment with a simple abstract ground and sky, with the user standing. A scale appears in the field of view to give the user a sense of segment length. The text 'right foot test/training' appears in the user's field of view and then disappears. A ball appears at a random location on the ground within the user's field of view, and the high-fidelity headset plays an instruction such as 'move the ball 5 meters straight forward from its position'. In the virtual reality environment the user's right foot is represented as a red square; the user steps on the ball, drags it to the target position and holds it there for at least 500 milliseconds, and the next trial begins. If the user does not bring the ball to the end point within 10 seconds, or produces no ball movement within 3 seconds, the virtual reality environment reports the test/training as invalid and the next trial begins. Right foot testing/training is followed by left foot testing/training: after the text 'left foot test/training' appears and disappears, a ball appears at a random ground location and the headset plays an instruction such as 'move the ball 3 meters straight forward'; the user's left foot is represented as a black square, and the same 500-millisecond hold and 10-second/3-second invalidation rules apply.
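All four scenarios share the same trial-validity rules: a trial is invalid if the ball does not reach the end point within 10 seconds or the user produces no ball movement for 3 seconds, and it is complete once the ball is held at the end point for the required dwell (100 ms for hand tasks, 500 ms for foot tasks). A sketch of that shared logic follows; the `ball` object and its methods are hypothetical stand-ins for the VR engine's state, and reading the 3-second rule as 'no movement in any 3-second window' is an interpretation.

```python
import time

def run_trial(ball, hold_ms, timeout_s=10.0, idle_s=3.0):
    """Return 'complete' or 'invalid' under the shared trial rules."""
    start = last_move = time.monotonic()
    hold_start = None
    while True:
        now = time.monotonic()
        if now - start > timeout_s or now - last_move > idle_s:
            return "invalid"            # environment prompts, next trial
        if ball.moved():                # hypothetical: moved since last poll
            last_move = now
        if ball.at_endpoint():          # hypothetical: ball at trajectory end
            hold_start = hold_start or now
            if (now - hold_start) * 1000.0 >= hold_ms:
                return "complete"       # dwell satisfied, next trial
        else:
            hold_start = None
        time.sleep(0.01)                # poll at ~100 Hz

# hand tasks: run_trial(ball, hold_ms=100); foot tasks: hold_ms=500
```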
As shown in fig. 8, a round of testing may include several trials of each test, for example 5: 5 visual hand spatial motion tests, 5 visual foot spatial motion tests, 5 auditory hand spatial motion tests and 5 auditory foot spatial motion tests, 20 tests in total. A spatial perception disorder test may comprise one, two or more rounds.
As shown in fig. 9, in step A the spatial perception training process divides the pre-stored training tasks into five schemes of 100, 80, 60, 40 and 20 according to the analysis result of the spatial perception data analysis module 9, i.e. 'poor', 'normal', 'good', 'excellent' or 'superior'. Each session comprises 4 bars of training items, each bar a visual training sub-item or an auditory training sub-item arranged in 'audiovisual' order, 10 minutes per bar, with a rest of about 5 minutes between bars, for a total training time of about 1 hour, as sketched below. Of course, in practical applications the grading of the analysis results, the scheme classification, the training items of each session, the duration of each bar and the rest time between bars can all be adjusted as appropriate.
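The grade-to-scheme mapping and session layout in this step can be captured as configuration. In the sketch below the five scheme levels, the 4-bar session, the 10-minute bars, the 5-minute rests and the 'audiovisual' ordering follow the text, while the direction of the grade-to-level mapping and all names are assumptions.

```python
SCHEME_BY_GRADE = {                 # assumed: weaker grade -> fuller scheme
    "poor": 100, "normal": 80, "good": 60, "excellent": 40, "superior": 20,
}

def build_session(grade: str) -> dict:
    """One training session: 4 bars alternating visual/auditory items,
    10 minutes each, ~5 minutes rest between bars (about 1 hour)."""
    level = SCHEME_BY_GRADE[grade]
    bars = [("visual", level), ("auditory", level)] * 2   # "audiovisual"
    return {"bars": bars, "bar_minutes": 10, "rest_minutes": 5}

print(build_session("normal"))
```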
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Although terms such as power supply module 1, main control module 2, hand spatial position acquisition module 3, foot spatial position acquisition module 4, spatial motion data processing module 5, visual-auditory task presentation module 6, virtual reality head-mounted module 61, high-fidelity headset 62, reference database module 7, report generation module 8, spatial perception data analysis module 9, spatial perception training scheme generation module 11 and spatial perception training process control module 12 are used extensively herein, the possibility of using other terms is not excluded. These terms are used merely to describe and explain the essence of the invention more conveniently, and construing them as any additional limitation would be contrary to the spirit of the present invention.

Claims (10)

1. A virtual reality visual and auditory pathway-based spatial perception disorder training system is characterized by comprising a power supply module (1), a main control module (2), and a spatial motion data processing module (5), a spatial perception data analysis module (9), a visual and auditory task presenting module (6), a reference database module (7) and a report generation module (8) which are connected with the main control module (2), wherein the spatial motion data processing module (5) is connected with a hand spatial position acquisition module (3) and a foot spatial position acquisition module (4),
the hand space position acquisition module (3) is used for acquiring motion parameters of the hand;
the foot space position acquisition module (4) is used for acquiring the motion parameters of the foot;
the visual and auditory task presentation module (6) is used for presenting immersive visual information and/or auditory information according to the command of the main control module (2);
a reference database module (7) for storing reference data;
the spatial perception data analysis module (9) is used for analyzing the test result according to the corresponding reference data in the reference database module (7);
and the report generating module (8) is used for generating a corresponding test report according to the analysis result of the spatial perception data analysis module (9).
2. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 1, comprising the following test steps:
S1, the visual and auditory task presentation module (6) is worn by the user, and the hand spatial position acquisition module (3) and the foot spatial position acquisition module (4) are worn on the user's hand and foot respectively;
S2, a virtual reality scene is provided, and at least one visual hand/foot spatial motion test and one auditory hand/foot spatial motion test are performed on the user in the virtual reality scene;
and S3, the spatial perception data analysis module (9) analyzes the user's test results.
3. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 2, wherein in step S2 the visual hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
the user is prompted in text form to control a target object with the corresponding hand/foot to complete a spatial movement along a designated path, and the completed spatial movement path length and the spatial motion control completion time are recorded;
and the auditory hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
the user is prompted by voice to control a target object with the corresponding hand/foot to complete the spatial movement specified by the voice instruction, and the actual spatial movement path length and the instructed path length are recorded.
4. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 3, wherein in step S3 the spatial perception data analysis module (9) analyzes the test results as follows:
S31, obtaining the visual spatial motion operation approximation through a visual spatial motion operation approximation calculation method, and obtaining the auditory spatial motion operation approximation through an auditory spatial motion operation approximation calculation method;
S32, obtaining a spatial perception comprehensive quotient from the visual and auditory spatial motion operation approximations through a spatial perception comprehensive quotient calculation method;
S33, obtaining a normalized quotient from the spatial perception comprehensive quotient and the reference data;
and S34, judging the user's spatial perception disorder test result according to the normalized quotient.
5. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 4, wherein in step S31 the visual spatial motion operation approximation calculation method uses the following formula:
A = (S1/T1 + S2/T2 + … + Sn/Tn)/n, where
A represents the visual spatial motion operation approximation;
S1, S2, …, Sn represent the completed spatial movement path length of each test;
T1, T2, …, Tn represent the spatial motion control completion time of each test;
n represents the number of tests;
and the auditory spatial motion operation approximation calculation method uses the following formula:
B = (C1/D1 + C2/D2 + … + Cn/Dn)/n, where
B represents the auditory spatial motion operation approximation;
C1, C2, …, Cn represent the actual spatial movement path length of each test;
and D1, D2, …, Dn represent the instructed path length of each test.
6. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 5, wherein in step S32 the spatial perception comprehensive quotient calculation method uses the following formula:
E = Ax + By, where
E represents the user's spatial perception comprehensive quotient;
x represents the weight of the visual spatial motion operation approximation;
y represents the weight of the auditory spatial motion operation approximation;
and in step S33 the normalized quotient is obtained through the following formula:
F = 100 + (E - G), where
F represents the user's normalized quotient;
and G represents the reference spatial perception comprehensive quotient in the reference data.
7. The virtual reality visual-auditory pathway-based spatial perception imbalance training system according to claim 6, further comprising a spatial perception training scheme generation module (11) and a spatial perception training process control module (12) connected to the main control module (2), wherein,
the spatial perception training scheme generation module (11) is used for providing the user with training tasks of the corresponding grade according to the analysis results of the spatial perception data analysis module (9);
and the spatial perception training process control module (12) is used for storing the user's training scheme, recording the progress of the scheme, and recording the historical performance of completed training.
8. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 7, further comprising the following training steps:
A. providing training tasks of corresponding grades for a user according to the analysis result of the spatial perception data analysis module (9);
B. step S2 is performed in a training manner under the corresponding training task.
9. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 8, wherein the foot spatial position acquisition module (4) and the hand spatial position acquisition module (3) each comprise a wearable ring and a mounting box fixed on the ring, the mounting box containing a six-axis/nine-axis sensor connected to the main control module (2) in a wired and/or wireless manner.
10. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 9, wherein the visual-auditory task presentation module (6) comprises a virtual reality headset module (61) and a high-fidelity headset (62) respectively connected to the main control module (2), and the high-fidelity headset (62) is integrated on the virtual reality headset module (61).
CN201910944581.7A 2019-09-30 2019-09-30 Spatial perception detuning training system based on virtual reality visual and auditory pathway Active CN110786825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910944581.7A CN110786825B (en) 2019-09-30 2019-09-30 Spatial perception detuning training system based on virtual reality visual and auditory pathway


Publications (2)

Publication Number Publication Date
CN110786825A true CN110786825A (en) 2020-02-14
CN110786825B CN110786825B (en) 2022-06-21

Family

ID=69440058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910944581.7A Active CN110786825B (en) 2019-09-30 2019-09-30 Spatial perception detuning training system based on virtual reality visual and auditory pathway

Country Status (1)

Country Link
CN (1) CN110786825B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140336539A1 (en) * 2011-11-11 2014-11-13 Rutgers, The State University Of New Jersey Methods for the Diagnosis and Treatment of Neurological Disorders
CN105496418A (en) * 2016-01-08 2016-04-20 中国科学技术大学 Arm-belt-type wearable system for evaluating upper limb movement function
CN109716444A (en) * 2016-09-28 2019-05-03 Bodbox股份有限公司 The assessment and guidance of athletic performance
CN107519622A (en) * 2017-08-21 2017-12-29 南通大学 Spatial cognition rehabilitation training system and method based on virtual reality and the dynamic tracking of eye
CN108433721A (en) * 2018-01-30 2018-08-24 浙江凡聚科技有限公司 The training method and system of brain function network detection and regulation and control based on virtual reality
CN108764204A (en) * 2018-06-06 2018-11-06 姜涵予 A kind of method and device of evaluation and test consciousness state
CN109350907A (en) * 2018-09-30 2019-02-19 浙江凡聚科技有限公司 The mostly dynamic obstacle of child attention defect based on virtual reality surveys method for training and system
CN109753868A (en) * 2018-11-14 2019-05-14 深圳卡路里科技有限公司 Appraisal procedure and device, the Intelligent bracelet of athletic performance
CN110232963A (en) * 2019-05-06 2019-09-13 中山大学附属第一医院 A kind of upper extremity exercise functional assessment system and method based on stereo display technique

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114748039A (en) * 2022-04-15 2022-07-15 中国民航大学 Spatial depth perception test system and method
WO2023240951A1 (en) * 2022-06-13 2023-12-21 深圳先进技术研究院 Training method, training apparatus, training device, and storage medium

Also Published As

Publication number Publication date
CN110786825B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN109620185B (en) Autism auxiliary diagnosis system, device and medium based on multi-modal information
US20240045470A1 (en) System and method for enhanced training using a virtual reality environment and bio-signal data
CN108463271B (en) System and method for motor skill analysis and skill enhancement and prompting
US10568502B2 (en) Visual disability detection system using virtual reality
CN204952205U Wear-type combination body-building system
CN110680314B (en) Virtual reality situation task attention training system based on brain electricity multi-parameter
KR20130098770A (en) Expanded 3d space based virtual sports simulation system
US20080050711A1 (en) Modulating Computer System Useful for Enhancing Learning
CN105879390A (en) Method and device for processing virtual reality game
CN110786825B (en) Spatial perception detuning training system based on virtual reality visual and auditory pathway
WO2019210087A1 (en) Methods, systems, and computer readable media for testing visual function using virtual mobility tests
CN111477055A (en) Virtual reality technology-based teacher training system and method
Lee et al. ADHD assessment and testing system design based on virtual reality
US11120631B2 (en) Cognitive training system
Ugulino et al. Landmark identification with wearables for supporting spatial awareness by blind persons
CN215875885U (en) Immersion type anti-stress psychological training system based on VR technology
CN106923785A (en) Vision screening system based on virtual reality technology
CN110721431B (en) Sensory integration detuning testing and training device and system based on visual and auditory pathways
CN212816265U (en) Virtual reality technology-based air force flying psychology selection device
KR20120097098A (en) Ubiquitous-learning study guiding device for improving study efficiency based on study emotion index generated from bio-signal emotion index and context information
KR20170140756A (en) Appratus for writing motion-script, appratus for self-learning montion and method for using the same
Gogia et al. Multi-modal affect detection for learning applications
CN110808091B (en) Sensory integration maladjustment training system based on virtual reality visual-audio sense path
Hu et al. Application of intelligent football training system based on IoT optical imaging and sensor data monitoring
Leeb et al. Combining BCI and virtual reality: scouting virtual worlds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Qin Lu; Wang Suogang; Li Weikuan; Liu Luoxi; Zhang Zhongyang

Inventor before: Qin Lu; Wang Suogang; Li Weikuan; Liu Luoxi; Zhang Zhongyang

GR01 Patent grant