
WO2023240951A1 - Training method, training apparatus, training device, and storage medium


Info

Publication number: WO2023240951A1
Authority: WIPO (PCT)
Prior art keywords: decision, visual, training, task, making
Application number: PCT/CN2022/138186
Other languages: French (fr), Chinese (zh)
Inventors: 张志林, 李胜楠, 杨伟平, 梁栋, 吴景龙
Original Assignee: 深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Application filed by 深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Publication of WO2023240951A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
          • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
          • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
      • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
        • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
          • G09B 9/00 Simulators for teaching or training purposes
      • G10 MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L 15/00 Speech recognition
          • G10L 15/01 Assessment or evaluation of speech recognition systems
          • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
          • G10L 15/063 Training
          • G10L 15/08 Speech classification or search
      • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
          • G16H 50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for calculating health indices; for individual health risk assessment

Definitions

  • the present application relates to the field of computer technology, and in particular, to a training method, device, equipment and storage medium.
  • Alzheimer's disease (AD) patients show comprehensive dementia; clinical manifestations include memory impairment, aphasia, apraxia, agnosia, impairment of visuospatial skills, executive dysfunction, and changes in personality and behavior.
  • Alzheimer's disease reduces patients' perceptual decision-making ability, causing them to perform poorly on decision-making tasks that require perceptual abilities, attention resources, memory, and other high-level cognitive abilities.
  • this application provides a training method, training apparatus, training device, and storage medium, which can effectively improve an individual's perceptual decision-making ability.
  • this application provides a training method, including: randomly displaying a perceptual decision-making task.
  • the perceptual decision-making task includes a visual classification task, an auditory classification task, and a visual and auditory classification task.
  • the visual classification task includes classifying M first pictures respectively.
  • the auditory classification task includes classifying N first sounds respectively.
  • the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each audio-visual stimulus pair including a second picture and a second sound corresponding to the target in the second picture, where M ≥ 2, N ≥ 2, L ≥ 2; collecting the behavioral response data generated by the user when completing the perceptual decision-making task; and determining the training results based on the behavioral response data, the training results including the accuracy of the user in completing the perceptual decision-making task.
  • the behavioral response data includes classification results corresponding to each classification task in the perceptual decision-making task and reaction times for completing each classification task.
  • the training method also includes: inputting the classification results and reaction times into a preset drift-diffusion model for processing to obtain the drift rate, decision boundary, and non-decision time; and evaluating the user's perceptual decision-making ability based on the drift rate, decision boundary, and non-decision time.
  • the training method also includes: determining the user's health status based on the user's perceptual decision-making ability.
  • the training method also includes: obtaining M preset pictures; adjusting the basic attributes of each preset picture to obtain M first pictures; and constructing the visual classification task based on the M first pictures.
  • the training method further includes: obtaining N preset sounds; adjusting the sound attributes of each preset sound to obtain N first sounds; and constructing an auditory classification task based on the N first sounds.
  • the training method further includes: determining L second pictures among the M first pictures; determining L second sounds among the N first sounds; comparing the L second pictures and L Pair the second sounds to obtain L audio-visual stimulus pairs; construct an audio-visual classification task based on the L audio-visual stimulus pairs.
  • the training method further includes: determining the stimulation intensity corresponding to each first picture and each first sound, where the stimulation intensity reflects the accuracy achieved when each first picture and each first sound is classified; selecting, among the M first pictures, pictures of a first stimulation intensity and pictures of a second stimulation intensity; selecting, among the N first sounds, sounds of the first stimulation intensity and sounds of the second stimulation intensity; constructing a perceptual decision-making task of the first stimulation intensity from the pictures and sounds of the first stimulation intensity; and constructing a perceptual decision-making task of the second stimulation intensity from the pictures and sounds of the second stimulation intensity.
  • this application provides a training device, including:
  • the display unit is used to randomly display perceptual decision-making tasks.
  • the perceptual decision-making tasks include visual classification tasks, auditory classification tasks and visual and auditory classification tasks.
  • the visual classification task includes classifying the M first pictures respectively, and the auditory classification task includes classifying the N first sounds respectively.
  • the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each audio-visual stimulus pair including a second picture and a second sound corresponding to the target in the second picture, where M ≥ 2, N ≥ 2, L ≥ 2;
  • the collection unit is used to collect behavioral response data generated by users when completing perceptual decision-making tasks;
  • the determination unit is used to determine the training results based on the behavioral response data, and the training results include the accuracy of the user completing the perceptual decision-making task.
  • the present application provides a training device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the training method of any one of the above first aspects is implemented.
  • the present application provides a computer-readable storage medium that stores a computer program.
  • when the computer program is executed by a processor, the training method described in any one of the above first aspects is implemented.
  • embodiments of the present application provide a computer program product.
  • when the computer program product is run on a processor, it causes the processor to execute the training method described in any one of the above first aspects.
  • the training method provided by this application randomly displays perceptual decision-making tasks to users, and trains users based on the perceptual decision-making tasks.
  • the behavioral response data generated by the user when completing the perceptual decision-making task is collected.
  • the training results can be determined, such as determining the accuracy of the user in completing the perceptual decision-making task.
  • this perceptual decision-making task includes classification tasks on multiple channels: visual, auditory, and audio-visual.
  • using this perceptual decision-making task to train users can accelerate the user's information storage and encoding in high-order cognitive processing and improve the user's reaction speed, which in turn promotes the formation of perceptual decisions, thereby effectively improving the individual's perceptual decision-making ability.
  • Figure 1 is a schematic flow chart of a training method provided by an exemplary embodiment of the present application.
  • Figure 2 is a schematic diagram of a first picture provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a first sound provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of an audio-visual stimulus pair provided by an embodiment of the present application.
  • Figure 5 is a specific flow chart of a training method according to another exemplary embodiment of the present application.
  • Figure 6 is a specific flow chart of a training method according to yet another exemplary embodiment of the present application.
  • Figure 7 is a schematic diagram of a training device provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of a training device provided by another embodiment of the present application.
  • Alzheimer's disease (AD) patients show comprehensive dementia; clinical manifestations include memory impairment, aphasia, apraxia, agnosia, impairment of visuospatial skills, executive dysfunction, and changes in personality and behavior.
  • Perceptual decision-making is a continuous, hierarchical cognitive operation that converts sensory information into goal-oriented responses: decision information is encoded and accumulated from sensory input (such as the information produced when objective things act directly on the sense organs), decision rules are applied to reach a decision, and the process culminates in a behavioral response. For example, a user sees a picture, determines that its content is an animal, and selects the animal option among the preset options; this entire process is called perceptual decision-making.
  • Alzheimer's disease reduces patients' perceptual decision-making ability, causing them to perform poorly on decision-making tasks that require perceptual abilities, attention resources, memory, and other high-level cognitive abilities.
  • this application provides a training method, training device, training equipment and storage medium.
  • the users are trained based on the perceptual decision-making tasks.
  • the behavioral response data generated by the user when completing the perceptual decision-making task is collected.
  • the training results can be determined, such as determining the accuracy of the user in completing the perceptual decision-making task.
  • this perceptual decision-making task includes classification tasks on multiple channels: visual, auditory, and audio-visual.
  • using this perceptual decision-making task to train users can accelerate the user's information storage and encoding in high-order cognitive processing and improve the user's reaction speed, which in turn promotes the formation of perceptual decisions, thereby effectively improving the individual's perceptual decision-making ability.
  • the embodiment of this application provides training software.
  • the training software can be installed in a training device, which can be any device that can display pictures and play audio, such as a smartphone, tablet, desktop computer, laptop, robot, or smart wearable device.
  • the training software provided by this application can not only train users, but also test the user's perceptual decision-making ability before or after training.
  • Figure 1 is a schematic flow chart of a training method provided by an exemplary embodiment of the present application.
  • the training method shown in Figure 1 may include S101 to S103, as follows:
  • S101 Randomly display a perceptual decision-making task.
  • Perceptual decision-making tasks include visual classification tasks, auditory classification tasks, and audio-visual classification tasks.
  • the visual classification task includes classifying M first pictures respectively, M ≥ 2.
  • M represents the number of first pictures.
  • M can be a positive integer greater than or equal to 2.
  • the first picture may be a picture containing any object.
  • for example, the first picture may be a picture containing a face, a car, an animal, a plant, a building, food, daily necessities, an electronic device, a musical instrument, and so on. Different types of first pictures can be added according to actual training needs. This is only an illustrative description and is not limiting.
  • Figure 2 is a schematic diagram of a first picture provided by an embodiment of the present application. As shown in Figure 2, this first picture in the visual classification task is a picture containing a face.
  • the first picture may be obtained by photographing, collected from the Internet, obtained by drawing, and so on.
  • the auditory classification task involves classifying N first sounds respectively, N ⁇ 2.
  • N represents the number of first sounds, for example, N can be a positive integer greater than or equal to 2.
  • the first sound may be audio containing any sound.
  • the first sound may be audio containing the sound of a person, audio containing the sound of a car, audio containing the sound of an animal, audio containing the sound of an electronic device, audio containing the sound of an instrument, etc.
  • Different types of first sounds can be added according to actual training needs. This is only an illustrative description and is not limiting.
  • Figure 3 is a schematic diagram of a first sound provided by an embodiment of the present application. As shown in Figure 3, this first sound in the auditory classification task is audio containing a human voice, specifically the voice of a little girl.
  • the first sound may be obtained by recording, or may be collected from the Internet, etc.
  • the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each audio-visual stimulus pair includes a second picture and a second sound corresponding to the target in the second picture, L ⁇ 2.
  • L represents the number of audio-visual stimulus pairs.
  • L can be a positive integer greater than or equal to 2.
  • the second picture may be a picture containing any object.
  • the second picture may be a picture containing faces, a picture containing cars, a picture containing animals, a picture containing musical instruments, etc.
  • the second sound may be audio containing human voices, audio containing car sounds, audio containing animal sounds, audio containing musical instrument sounds, etc.
  • for example, a picture containing a face and the audio of the corresponding voice form an audio-visual stimulus pair; a picture containing a car and the audio of the corresponding car sound form an audio-visual stimulus pair; a picture containing an animal and the audio of the corresponding animal sound form an audio-visual stimulus pair; and a picture containing a musical instrument and the audio of the corresponding instrument sound form an audio-visual stimulus pair.
  • Figure 4 is a schematic diagram of an audio-visual stimulus pair provided by an embodiment of the present application.
  • Figure 4 shows an audio-visual stimulus pair in the audio-visual classification task.
  • the audio-visual stimulus pair includes a second picture and a second sound corresponding to the target in the second picture.
  • the second picture is a picture containing a car
  • the second sound is an audio containing a car sound, specifically an audio containing a car horn.
  • the second picture in an audio-visual stimulus pair can be selected from the first pictures, re-photographed, collected from the Internet, or obtained by drawing.
  • the second sound in an audio-visual stimulus pair can be selected from the first sounds, recorded, or collected from the Internet.
  • for example, after the user starts the training (by manual click, remote operation, or voice control), the perceptual decision-making task begins to be displayed randomly in the display interface of the training device.
  • first, a fixation point is presented at the center of the display interface; its presentation duration can be set as needed, for example to 2000 ms. Then the visual classification task, the auditory classification task, and the audio-visual classification task are displayed randomly.
  • one way of displaying is to display one task completely and then another task, until all tasks are displayed.
  • for example, the visual classification task is shown first; after all M first pictures in the visual classification task are shown, the auditory classification task is shown; after the N first sounds in the auditory classification task are shown, the audio-visual classification task is shown, until all L audio-visual stimulus pairs in the audio-visual classification task are displayed.
  • the display order may be visual, auditory, audio-visual; or visual, audio-visual, auditory; or audio-visual, visual, auditory; and so on, which is not limited here.
  • another way of displaying is to intersperse the visual classification task, the auditory classification task, and the audio-visual classification task, that is, the M first pictures, the N first sounds, and the L audio-visual stimulus pairs are interspersed until all tasks are displayed. For example, several first pictures are displayed, then several audio-visual stimulus pairs, then several first sounds, then several first pictures again, and so on; or a first picture is displayed, then a first sound, then an audio-visual stimulus pair, then a first sound, then a first picture, and so on, until all tasks are displayed. A sketch of such an interleaved sequence is given below.
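  • as a minimal illustration (the function name and task labels are hypothetical, not part of the original disclosure), the three task types can be pooled and shuffled in Python:

```python
import random

def build_trial_sequence(first_pictures, first_sounds, av_pairs):
    """Pool the three task types and shuffle them, so pictures, sounds,
    and audio-visual stimulus pairs are interspersed at random."""
    trials = ([("visual", p) for p in first_pictures]
              + [("auditory", s) for s in first_sounds]
              + [("audio-visual", pair) for pair in av_pairs])
    random.shuffle(trials)
    return trials

# Example: 4 pictures, 4 sounds, and 2 audio-visual pairs in one random order.
sequence = build_trial_sequence(["p1", "p2", "p3", "p4"],
                                ["s1", "s2", "s3", "s4"],
                                [("p5", "s5"), ("p6", "s6")])
```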
  • S102 Collect the user's behavioral response data when completing the perceptual decision-making task.
  • in the display interface of the training device, in addition to each perceptual decision-making task, the options corresponding to each perceptual decision-making task are also displayed.
  • the user makes a choice for each classification task.
  • the data generated by these selection operations are behavioral response data, and the training device collects these behavioral response data.
  • for example, the first picture in the visual classification task is displayed in the current display interface, and two options are displayed side by side below, above, to the right of, or to the left of the first picture.
  • if the first picture is a picture containing a face, the correct choice is to click the left of the two options displayed side by side; if the first picture is a picture containing a car, the correct choice is to click the right of the two options.
  • similarly, the training device plays the first sound in the auditory classification task, and the display interface displays two options side by side.
  • if the first sound is a human voice, the correct choice is to click the left of the two options displayed side by side; if the first sound is the sound of a car, the correct choice is to click the right of the two options.
  • the distance between the user and the training device can be set and adjusted as needed; for example, the user sits 60 cm away from the display interface and speakers.
  • the training device records the choices made by the user for each classification task.
  • the selection operation for each classification task can also be performed with a mouse: for some classification tasks the correct choice is to click the left mouse button, and for other classification tasks the correct choice is to click the right mouse button.
  • for example, while displaying the second picture of an audio-visual stimulus pair, the training device plays the second sound corresponding to the second picture; if the second picture contains a face and the second sound contains a human voice, the correct selection is to click the left mouse button.
  • the training device records the choices made by the user for each classification task.
  • two adjacent classification tasks can be displayed at a preset time interval, and each classification task is displayed for a preset duration.
  • for example, the preset time interval between two adjacent classification tasks can be 1200 to 1500 ms, and the display duration of each classification task can be 300 ms; the preset time interval between two adjacent audio-visual stimulus pairs can be 1200 to 1500 ms, and the display duration of each audio-visual stimulus pair can be 300 to 500 ms. This is only an illustrative description and is not limiting. A timing sketch is given below.
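  • a timing sketch under the example values above; the show and hide callbacks are hypothetical placeholders for the training device's display routines:

```python
import random
import time

STIMULUS_DURATION_S = 0.3        # 300 ms display per classification task
INTERVAL_RANGE_S = (1.2, 1.5)    # 1200-1500 ms between adjacent tasks

def present_stimulus(show, hide, stimulus):
    """Display one stimulus for the preset duration, then wait a
    randomly jittered inter-trial interval before the next task."""
    show(stimulus)
    time.sleep(STIMULUS_DURATION_S)
    hide()
    time.sleep(random.uniform(*INTERVAL_RANGE_S))
```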
  • S103 Determine the training results based on the behavioral response data. The training results include the accuracy of the user in completing the perceptual decision-making task.
  • the behavioral response data are the data generated by the user's selection operation for each classification task.
  • the behavioral response data corresponding to each classification task are compared with the correct choice corresponding to that task, and the training result is determined based on the comparison results.
  • for example, one point is scored for each correct choice, and no points are scored for no choice or a wrong choice. A score can thus be obtained from the user's behavioral response data, and the proportion of this score to the total score (the score obtained when all tasks are answered correctly) gives the accuracy of the user in completing the perceptual decision-making task. This is only an illustrative description and is not limiting; a sketch follows.
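  • a minimal sketch of this scoring rule:

```python
def task_accuracy(selections, correct_choices):
    """Score one point per correct selection; None means no selection.
    Accuracy is the score divided by the total attainable score."""
    score = sum(1 for s, c in zip(selections, correct_choices) if s == c)
    return score / len(correct_choices)

# Example: 3 of 4 classification tasks answered correctly -> 0.75.
print(task_accuracy(["left", "right", None, "left"],
                    ["left", "right", "left", "left"]))
```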
  • the perceptual decision-making task is randomly displayed to the user, and the user is trained based on the perceptual decision-making task.
  • the behavioral response data generated by the user when completing the perceptual decision-making task is collected. Based on the behavioral response data, the training results can be determined, such as determining the accuracy of the user in completing the perceptual decision-making task.
  • this perceptual decision-making task includes classification tasks on multiple channels: visual, auditory, and audio-visual.
  • using this perceptual decision-making task to train users can accelerate the user's information storage and encoding in high-order cognitive processing and improve the user's reaction speed, which in turn promotes the formation of perceptual decisions, thereby effectively improving the individual's perceptual decision-making ability.
  • Figure 5 is a specific flow chart of a training method according to another exemplary embodiment of the present application.
  • the training method shown in Figure 5 may include S201 to S205, as follows:
  • S201 Randomly display a perceptual decision-making task.
  • S202 Collect the user's behavioral response data when completing the perceptual decision-making task.
  • S203 Determine training results based on behavioral response data.
  • S201 to S203 are identical to S101 to S103 in the embodiment corresponding to Figure 1; for details, refer to the description of S101 to S103 there, which is not repeated here.
  • Behavioral response data include the classification results corresponding to each classification task in the perceptual decision-making task and the reaction time to complete each classification task.
  • the classification result corresponding to each classification task is the selection operation made by the user, for example clicking the left or the right of two options displayed side by side, or clicking the left or the right mouse button.
  • the reaction time to complete each classification task is determined by the time at which the classification task is first presented and the time at which the user makes a selection: for a given classification task, timing starts when the task is displayed and stops as soon as the user makes a selection, and the recorded time is the reaction time corresponding to that task. A measurement sketch is given below.
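  • a minimal sketch of this measurement; display_stimulus and wait_for_choice are hypothetical hooks into the training device:

```python
import time

def timed_trial(display_stimulus, wait_for_choice):
    """Reaction time = timestamp of the selection minus the timestamp
    at which the classification task was first displayed."""
    onset = time.perf_counter()
    display_stimulus()
    choice = wait_for_choice()   # blocks until the user makes a selection
    reaction_time = time.perf_counter() - onset
    return choice, reaction_time
```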
  • S204 Input the classification results and reaction time into the preset drift diffusion model for processing, and obtain the drift rate, decision boundary and non-decision time.
  • the preset drift-diffusion model simulates the decision-making process in the classification task: the user's two possible choices are represented as an upper boundary and a lower boundary, and the perceptual decision-making process continuously accumulates evidence over time until one of the two boundaries is reached, which then triggers the corresponding behavioral response.
  • the drift rate, decision boundary, and non-decision time are different parameters obtained by the drift-diffusion model from the classification results and reaction times; they map the cognitive processing behind the behavior of the perceptual decision-making process. Specifically, the drift rate describes the speed at which information is accumulated, the decision boundary describes the amount of evidence that must be reached before a response is made, and the non-decision time describes the time spent on sensory encoding and motor response.
  • the specific parameters of the drift-diffusion model can be calculated under different situations to reflect the user's potential cognitive process in the cross-channel perceptual decision-making process, thereby determining the user's training effect.
  • before fitting, outliers can be removed: the standard deviation of all remaining reaction times is calculated, and data whose reaction times fall outside a preset range of standard deviations are eliminated; for example, data whose reaction times lie beyond plus or minus 2.5 standard deviations are eliminated. This is only an illustrative description and is not limiting. A minimal sketch follows.
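  • a minimal sketch, assuming the plus-or-minus 2.5 standard deviation criterion above:

```python
import numpy as np

def trim_reaction_times(rts, n_sd=2.5):
    """Drop reaction times farther than n_sd standard deviations
    from the mean of the remaining data."""
    rts = np.asarray(rts, dtype=float)
    mask = np.abs(rts - rts.mean()) <= n_sd * rts.std()
    return rts[mask]
```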
  • f(t) is the conditional probability distribution of the reaction time t.
  • the function can be split into two parts: a prior and a likelihood.
  • the prior refers to the subjective guess of the probability distribution before the parameters of the drift-diffusion model are known, while the likelihood refers to the drift-diffusion model parameters calculated once the probability distribution of the behavioral response data is obtained.
  • the focus of the drift-diffusion model is to find the parameter values under the likelihood. Because of the complexity of the formula, the parameter values cannot be obtained directly, so the Markov chain Monte Carlo (MCMC) algorithm is needed.
  • the MCMC algorithm can characterize a function through continuous sampling and thereby infer population parameters from samples; therefore, the likelihood part of the Bayesian formulation is computed through the MCMC algorithm to estimate the parameter distribution.
  • for example, the HDDM toolbox (a Python package) can be used, which provides hierarchical Bayesian parameter estimation for the drift-diffusion model and allows the model parameters of each subject to be estimated simultaneously, thereby obtaining the drift rate, decision boundary, and non-decision time.
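  • a sketch of hierarchical parameter estimation with the Python HDDM toolbox; the CSV file name and column layout (subj_idx, rt in seconds, response coded 0/1) are assumptions about how the behavioral response data are exported:

```python
import hddm

# Behavioral data: one row per classification trial.
data = hddm.load_csv('behavioral_responses.csv')

model = hddm.HDDM(data)        # hierarchical drift-diffusion model
model.find_starting_values()   # optimize a starting point for MCMC
model.sample(2000, burn=200)   # draw posterior samples via MCMC

stats = model.gen_stats()
# v = drift rate, a = decision boundary, t = non-decision time
print(stats.loc[['v', 'a', 't'], 'mean'])
```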
  • in addition, when the classification results and reaction times are input into the preset drift-diffusion model for processing, further parameters can be obtained, such as the relative starting point, the inter-training variation of the relative starting point, the inter-training variation of the drift rate, and the inter-training variation of the non-decision time.
  • the relative starting point is used to describe the starting preference for response selection.
  • the inter-training variation of the relative starting point is expressed as a uniformly distributed range around the mean relative starting point and describes the distribution of actual starting points in a particular training session.
  • the inter-training variation of the drift rate is expressed as the standard deviation of a normal distribution whose mean is the drift rate and describes the actual drift-rate distribution in a particular training session.
  • the inter-training variation of the non-decision time is expressed as a uniformly distributed range around the mean non-decision time and describes the distribution of actual non-decision times in training.
  • S205 Evaluate the user's perceptual decision-making ability based on the drift rate, decision boundary and non-decision time.
  • for example, the drift rate, decision boundary, and non-decision time each correspond to a different indicator range.
  • the indicator range corresponding to the drift rate can be greater than -5 and less than 5; the indicator range corresponding to the decision boundary can be greater than 0.5 and less than 2; the indicator range corresponding to the non-decision time can be greater than 0.1 and less than 0.5.
  • if the user's drift rate, decision boundary, and non-decision time are all within their respective indicator ranges, the user's perceptual decision-making ability is assessed as strong; if two of the three are within their respective ranges, the ability is assessed as moderate; if only one of them is within its range, or none is, the ability is assessed as weak. This is only an illustrative description and is not limiting. A sketch of this rule follows.
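  • a minimal sketch, taking the illustrative indicator ranges above as the assumed thresholds:

```python
def assess_ability(drift_rate, boundary, non_decision_time):
    """Count how many parameters fall inside their indicator ranges:
    3 -> strong, 2 -> moderate, 0 or 1 -> weak."""
    in_range = sum([
        -5.0 < drift_rate < 5.0,
        0.5 < boundary < 2.0,
        0.1 < non_decision_time < 0.5,   # seconds
    ])
    return {3: "strong", 2: "moderate"}.get(in_range, "weak")

print(assess_ability(1.8, 1.1, 0.3))   # -> "strong"
```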
  • in this way, the user's classification results and reaction times are processed by a preset drift-diffusion model to obtain the drift rate, decision boundary, and non-decision time.
  • parameters such as the drift rate, decision boundary, and non-decision time accurately reflect the user's underlying cognitive process during cross-channel perceptual decision-making, so analyzing these parameters allows the user's perceptual decision-making ability to be evaluated accurately.
  • Figure 6 is a specific flow chart of a training method according to yet another exemplary embodiment of the present application.
  • the training method shown in Figure 6 may include S301 to S306. It is worth noting that S301 to S305 in this embodiment are identical to S201 to S205 in the embodiment corresponding to Figure 5; for details, refer to the description of S201 to S205 there, which is not repeated here. S306 is detailed as follows:
  • S306 Determine the user's health status based on the user's perceptual decision-making ability.
  • some diseases reduce a user's perceptual decision-making ability; for example, Alzheimer's disease reduces patients' perceptual decision-making ability.
  • therefore, if the user's perceptual decision-making ability is assessed as weak, the health state of the user in this training may be determined to be an unhealthy state; for example, the user in this training may be a patient with Alzheimer's disease. This is only an illustrative description and is not limiting.
  • in this way, the user's health status can be determined, which, for example, helps detect Alzheimer's disease patients accurately and in a timely manner so that they can be treated as early as possible.
  • the training method provided by this application may also include: obtaining M preset pictures; adjusting the basic attributes of each preset picture to obtain M first pictures; and constructing a visual classification task based on the M first pictures.
  • Basic attributes can include the spatial frequency, contrast, brightness, pixels, size, clarity, format, and so on of a picture. For example, several preset pictures are obtained, half of which contain faces and the other half of which contain cars; the spatial frequency, contrast, brightness, and pixels of these pictures are adjusted to be consistent, for example by setting the resolution to 670 × 670 pixels.
  • then the clarity of each picture is adjusted via the signal-to-noise ratio to 8 different levels: 30%, 32.5%, 35%, 37.5%, 40%, 42.5%, 45%, and 50%.
  • in this way, M first pictures are obtained; for example, 240 first pictures.
  • then a correct option is set for each first picture: the correct choice for a first picture may be to click the left of two options displayed side by side, or to click the right option, or to click the left mouse button, or to click the right mouse button, and so on.
  • in this way, a visual classification task is constructed.
  • because the basic attributes of each first picture have been adjusted, training bias caused by differences in the basic attributes of the pictures is effectively avoided, and it is ensured that the basic attributes of the pictures will not affect the user's choice, thus improving the accuracy of the training results. A sketch of one possible clarity adjustment follows.
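  • a hedged sketch of one possible signal-to-noise clarity adjustment; linear blending with white noise is an assumption, not the patent's specified procedure:

```python
import numpy as np

CLARITY_LEVELS = [0.30, 0.325, 0.35, 0.375, 0.40, 0.425, 0.45, 0.50]

def degrade_clarity(img, snr):
    """Mix the image with white noise so that a fraction `snr` of the
    blend comes from the original picture (8-bit grayscale assumed)."""
    noise = np.random.normal(img.mean(), img.std(), img.shape)
    blended = snr * img.astype(float) + (1.0 - snr) * noise
    return np.clip(blended, 0, 255).astype(np.uint8)
```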
  • similarly, the training method provided by this application may also include: obtaining N preset sounds; adjusting the sound attributes of each preset sound to obtain N first sounds; and constructing an auditory classification task based on the N first sounds.
  • the preset sound refers to the original, unprocessed sound.
  • Sound attributes can include the frequency, pitch, loudness, timbre, and so on of a sound. For example, several preset sounds are obtained, half of which are human voices and the other half of which are car sounds, and the loudness and frequency of these sounds are adjusted to be consistent.
  • for example, the sounds can be processed with preset software such as Matlab, and the loudness and frequency of the processed sounds are adjusted to be consistent.
  • speech synthesis software is then used to embed the processed sounds into white noise of different loudness to obtain first sounds with different signal-to-noise ratios.
  • for example, the loudness of each processed sound can be reduced to 50%, and speech synthesis software can be used to embed the loudness-adjusted sounds into eight white noises of different loudness, yielding first sounds with signal-to-noise ratios of 12.5%, 25%, 37.5%, 50%, 62.5%, 75%, 87.5%, and 100%, respectively.
  • the loudness of these first sounds is kept consistent, for example at 60 dB.
  • in this way, N first sounds are obtained; for example, 240 first sounds.
  • then a correct option is set for each first sound: the correct choice for a first sound may be to click the left of two options displayed side by side, or to click the right option, or to click the left mouse button, or to click the right mouse button, and so on.
  • in this way, an auditory classification task is constructed.
  • because the sound attributes of each first sound have been adjusted, training bias caused by differences in the attributes of the sounds is effectively avoided, and it is ensured that the sound attributes will not affect the user's choice, thus improving the accuracy of the training results. A sketch of one possible noise-embedding step follows.
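  • a hedged sketch of embedding a sound into white noise at a given signal fraction; the linear mixing rule is an assumption standing in for the speech synthesis software mentioned above:

```python
import numpy as np

SNR_LEVELS = [0.125, 0.25, 0.375, 0.50, 0.625, 0.75, 0.875, 1.0]

def embed_in_white_noise(signal, snr):
    """Mix a mono waveform with white noise: `snr` is the fraction
    of the mixture contributed by the original signal."""
    noise = np.random.normal(0.0, signal.std(), signal.shape)
    return snr * signal + (1.0 - snr) * noise
```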
  • the training method provided by this application may also include: determining L second pictures among the M first pictures; determining L second sounds among the N first sounds; pairing the L second pictures and the L second sounds to obtain L audio-visual stimulus pairs; and constructing an audio-visual classification task based on the L audio-visual stimulus pairs.
  • the second pictures in the audio-visual stimulus pairs may be selected from the first pictures, and the second sounds may be selected from the first sounds.
  • for example, L first pictures are selected from the M first pictures and determined as the L second pictures, and L first sounds are selected from the N first sounds and determined as the L second sounds.
  • because the second sound must be the sound corresponding to the target in the second picture, the sound corresponding to that target is selected.
  • for example, a first picture containing a car is selected from the M first pictures, and the sound corresponding to the car happens to be among the N first sounds; the selected first picture is determined as a second picture, and the corresponding car sound among the N first sounds is determined as the matching second sound.
  • then a correct option is set for each audio-visual stimulus pair: the correct choice may be to click the left of two options displayed side by side, or to click the right option, or to click the left mouse button, or to click the right mouse button, and so on.
  • in this way, an audio-visual classification task is constructed.
  • the second picture and the second sound in each audio-visual stimulus pair are selected from the first pictures and the first sounds. Since the basic attributes of the first pictures and the sound attributes of the first sounds have already been adjusted, the basic attributes of the second pictures and the sound attributes of the second sounds in each audio-visual stimulus pair are effectively adjusted as well. This avoids training bias caused by differences in basic attributes and sound attributes, ensures that these attributes will not affect the user's choice, and thus improves the accuracy of the training results.
  • pre-training may also be included before formal training.
  • for example, the training method provided by this application may also include S401 to S405.
  • S401 Determine the stimulation intensity corresponding to each first picture and each first sound.
  • S402 Select the picture whose stimulation intensity is the first stimulation intensity and the picture whose stimulation intensity is the second stimulation intensity among the M first pictures.
  • S403 Select the sound whose stimulation intensity is the first stimulation intensity and the sound whose stimulation intensity is the second stimulation intensity from the N first sounds.
  • the stimulus intensity is used to reflect the corresponding accuracy of each first picture and each first sound when they are classified.
  • for example, the M first pictures are displayed in the display interface of the training device, the user makes a selection operation for each first picture, and each selection operation is compared with the correct choice corresponding to that first picture. One point is scored for each correct choice, and no points are scored for no selection or a wrong selection. A score can thus be obtained from all of the user's selection operations, and the proportion of this score to the total score (the score obtained when all first pictures are classified correctly) gives the accuracy of the user's pre-training.
  • based on this accuracy, the stimulation intensity at which the user classifies the first pictures correctly is determined: if the accuracy reaches a first threshold, the stimulation intensity of the correctly classified first pictures is taken as the first stimulation intensity; if the accuracy reaches a second threshold, it is taken as the second stimulation intensity.
  • the first threshold is greater than the second threshold, and the first stimulation intensity is higher than the second stimulation intensity.
  • for example, the first threshold is 90%, the second threshold is 70%, the first stimulation intensity is high intensity, and the second stimulation intensity is low intensity.
  • if the accuracy of this pre-training is 90%, the stimulation intensity at which the user classified the first pictures correctly is the first stimulation intensity, that is, high intensity; if the accuracy is 70%, it is the second stimulation intensity, that is, low intensity.
  • similarly, the N first sounds are presented by the training device, the user makes a selection operation for each first sound, and each selection operation is compared with the correct choice corresponding to that first sound. One point is scored for each correct choice, and no points are scored for no selection or a wrong selection. A score can thus be obtained from all of the user's selection operations, and the proportion of this score to the total score (the score obtained when all first sounds are classified correctly) gives the accuracy of the user's pre-training.
  • based on this accuracy, the stimulation intensity at which the user classifies the first sounds correctly is determined: if the accuracy reaches the first threshold, the stimulation intensity of the correctly classified first sounds is taken as the first stimulation intensity; if the accuracy reaches the second threshold, it is taken as the second stimulation intensity.
  • the first threshold is greater than the second threshold, and the first stimulation intensity is higher than the second stimulation intensity.
  • for example, the first threshold is 90%, the second threshold is 70%, the first stimulation intensity is high intensity, and the second stimulation intensity is low intensity.
  • if the accuracy of this pre-training is 90%, the stimulation intensity at which the user classified the first sounds correctly is the first stimulation intensity, that is, high intensity; if the accuracy is 70%, it is the second stimulation intensity, that is, low intensity. A sketch of this threshold rule follows.
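  • a minimal sketch of this threshold rule; reading "reaches the threshold" as "at or above" is an assumption:

```python
def stimulus_intensity(accuracy, first_threshold=0.9, second_threshold=0.7):
    """Pre-training accuracy at or above the first threshold maps to the
    first (high) intensity; at or above the second threshold, to the
    second (low) intensity; otherwise no intensity is assigned."""
    if accuracy >= first_threshold:
        return "first (high) stimulation intensity"
    if accuracy >= second_threshold:
        return "second (low) stimulation intensity"
    return None

print(stimulus_intensity(0.9))   # -> first (high) stimulation intensity
print(stimulus_intensity(0.7))   # -> second (low) stimulation intensity
```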
  • S404 Construct a perceptual decision-making task of the first stimulus intensity based on the picture of the first stimulus intensity and the sound of the first stimulus intensity.
  • the perceptual decision-making tasks of the first stimulus intensity include the visual classification task of the first stimulus intensity, the auditory classification task of the first stimulus intensity, and the visual and auditory classification task of the first stimulus intensity.
  • the process of constructing the visual, auditory, and audio-visual classification tasks of the first stimulus intensity is similar to the process of constructing the visual, auditory, and audio-visual classification tasks described above.
  • the difference is that the tasks described above are constructed from the first pictures, the first sounds, the second pictures, and the second sounds, whereas in this embodiment they are constructed from the pictures and sounds of the first stimulation intensity.
  • for the specific process, refer to the construction of the visual, auditory, and audio-visual classification tasks described above, which is not repeated here.
  • for example, the constructed visual classification task of the first stimulus intensity contains 50 pictures of the first stimulus intensity, the auditory classification task of the first stimulus intensity contains 50 sounds of the first stimulus intensity, and the audio-visual classification task of the first stimulus intensity contains 50 audio-visual stimulus pairs.
  • S405 Construct a perceptual decision-making task of the second stimulus intensity based on the picture of the second stimulus intensity and the sound of the second stimulus intensity.
  • the perceptual decision-making tasks of the second stimulus intensity include the visual classification task of the second stimulus intensity, the auditory classification task of the second stimulus intensity, and the visual and auditory classification task of the second stimulus intensity.
  • the process of constructing the visual, auditory, and audio-visual classification tasks of the second stimulus intensity is similar to the process of constructing the visual, auditory, and audio-visual classification tasks described above.
  • the difference is that the tasks described above are constructed from the first pictures, the first sounds, the second pictures, and the second sounds, whereas in this embodiment they are constructed from the pictures and sounds of the second stimulation intensity.
  • for the specific process, refer to the construction of the visual, auditory, and audio-visual classification tasks described above, which is not repeated here.
  • for example, the constructed visual classification task of the second stimulus intensity contains 50 pictures of the second stimulus intensity, the auditory classification task of the second stimulus intensity contains 50 sounds of the second stimulus intensity, and the audio-visual classification task of the second stimulus intensity contains 50 audio-visual stimulus pairs.
  • in this way, perceptual decision-making tasks with different stimulation intensities are constructed, and different users can be trained with the perceptual decision-making task of the first stimulation intensity or that of the second stimulation intensity, so that the perceptual decision-making abilities of different users can be improved in a targeted manner.
  • the training method provided by this application may also include: adjusting the difficulty of the perceptual decision-making task according to the training results, thereby more effectively improving the user's perceptual decision-making ability.
  • if the user's accuracy in completing the perceptual decision-making task is greater than a preset accuracy rate, the current training effect is good and the difficulty of the perceptual decision-making task can be increased: for example, the types of pictures and sounds in the task can be gradually increased, the preset time interval between two adjacent classification tasks can be shortened, and the number of options per classification task can be increased.
  • if the user's accuracy in completing the perceptual decision-making task is less than or equal to the preset accuracy rate, the current training effect is poor and the difficulty of the perceptual decision-making task can be reduced: for example, the types of pictures and sounds can be reduced, and the preset time interval between two adjacent classification tasks can be increased. A sketch of such an adaptive rule follows.
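  • a hedged sketch of such an adaptive rule; the preset accuracy of 0.8, the step sizes, and the task fields are illustrative assumptions:

```python
def adjust_difficulty(task, accuracy, preset_accuracy=0.8):
    """Make the task harder when accuracy beats the preset rate,
    easier otherwise, within simple bounds."""
    if accuracy > preset_accuracy:
        task["interval_ms"] = max(800, task["interval_ms"] - 100)  # shorter gap
        task["n_options"] += 1                                     # more options
    else:
        task["interval_ms"] += 100                                 # longer gap
        task["n_options"] = max(2, task["n_options"] - 1)          # fewer options
    return task

task = {"interval_ms": 1350, "n_options": 2}
task = adjust_difficulty(task, accuracy=0.85)   # harder after a good run
```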
  • the training method provided in this application can also use a competition (race) model to study the impact of cross-channel stimulation on patients with Alzheimer's disease. Individuals respond faster when visual and auditory information are presented simultaneously than when single-channel information (such as visual information or auditory information) is presented; this phenomenon is called the redundant signal effect (RSE).
  • RSE can be explained by statistical facilitation: among multi-sensory stimuli (visual and auditory), the individual responds to whichever single-channel stimulus (visual or auditory) reaches the sensory threshold first, so responses to dual-channel stimuli can be accelerated even when no integration occurs.
  • training with multi-sensory (audio-visual) stimulation therefore helps the individual reach the sensory threshold sooner, thereby improving the individual's perceptual decision-making ability.
  • FIG. 7 is a schematic diagram of a training device provided by an embodiment of the present application. As shown in Figure 7, the training device provided by this embodiment includes:
  • the display unit 510 is used to randomly display perceptual decision-making tasks.
  • the perceptual decision-making tasks include visual classification tasks, auditory classification tasks, and visual and auditory classification tasks.
  • the visual classification task includes classifying M first pictures respectively, the auditory classification task includes classifying N first sounds respectively, and the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each audio-visual stimulus pair including a second picture and a second sound corresponding to the target in the second picture, where M ≥ 2, N ≥ 2, L ≥ 2;
  • the collection unit 520 is used to collect behavioral response data generated by the user when completing the perceptual decision-making task;
  • the determining unit 530 is configured to determine training results according to the behavioral response data, where the training results include the accuracy of the user completing the perceptual decision-making task.
  • the behavioral response data includes classification results corresponding to each classification task in the perceptual decision-making task and reaction times for completing each classification task.
  • the training device also includes:
  • An evaluation unit is used to input the classification results and the reaction times into a preset drift-diffusion model for processing to obtain the drift rate, decision boundary, and non-decision time, and to evaluate the user's perceptual decision-making ability according to the drift rate, the decision boundary, and the non-decision time.
  • the training device also includes:
  • a state determination unit is configured to determine the health state of the user based on the user's perceptual decision-making ability.
  • the training device also includes:
  • the first construction unit is used to obtain M preset pictures; adjust the basic attributes of each preset picture to obtain M first pictures; and construct the visual classification task based on the M first pictures.
  • the training device also includes:
  • the second construction unit is used to obtain N preset sounds; adjust the sound attributes of each preset sound to obtain N first sounds; and construct the auditory classification task based on the N first sounds.
  • the training device also includes:
  • a third construction unit configured to determine L second pictures among the M first pictures; determine L second sounds among the N first sounds; and combine the L second pictures and the L second sounds are paired to obtain the L audio-visual stimulus pairs; the audio-visual classification task is constructed based on the L audio-visual stimulus pairs.
  • the training device also includes:
  • a fourth construction unit is used to determine the stimulation intensity corresponding to each first picture and each first sound, where the stimulation intensity reflects the accuracy achieved when each first picture and each first sound is classified; to select, among the M first pictures, pictures whose stimulation intensity is the first stimulation intensity and pictures whose stimulation intensity is the second stimulation intensity; to select, among the N first sounds, sounds whose stimulation intensity is the first stimulation intensity and sounds whose stimulation intensity is the second stimulation intensity; to construct a perceptual decision-making task of the first stimulation intensity according to the pictures and sounds of the first stimulation intensity; and to construct a perceptual decision-making task of the second stimulation intensity according to the pictures and sounds of the second stimulation intensity.
  • FIG. 8 is a schematic diagram of a training device provided by another embodiment of the present application.
  • the training device 6 of this embodiment includes: a processor 60 , a memory 61 and a computer program 62 stored in the memory 61 and executable on the processor 60 .
  • when the processor 60 executes the computer program 62, the steps in each of the above training method embodiments are implemented, such as S101 to S103 shown in FIG. 1.
  • alternatively, when the processor 60 executes the computer program 62, the functions of each unit in the above embodiments are implemented, such as the functions of units 510 to 530 shown in FIG. 7.
  • the computer program 62 may be divided into one or more units, and the one or more units are stored in the memory 61 and executed by the processor 60 to complete the present application.
  • the one or more units may be a series of computer instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 62 in the training device 6.
  • the computer program 62 can be divided into a display unit, a collection unit and a determination unit, and the specific functions of each unit are as described above.
  • the training device may include, but is not limited to, a processor 60 and a memory 61 .
  • FIG. 8 is only an example of the training device 6 and does not constitute a limitation of the device; it may include more or fewer components than shown in the figure, combine certain components, or use different components.
  • For example, the training equipment may also include input and output devices, network access devices, buses, etc.
  • the processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the memory 61 may be an internal storage unit of the training device, such as a hard disk or memory of the device.
  • the memory 61 may also be an external storage device of the training device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card (Flash Card) equipped on the training device.
  • the memory 61 may also include both an internal storage unit of the device and an external storage terminal.
  • the memory 61 is used to store the computer instructions and other programs and data required by the terminal.
  • the memory 61 can also be used to temporarily store data that has been output or is to be output.
  • Embodiments of the present application also provide a computer storage medium.
  • the computer storage medium may be non-volatile or volatile.
  • the computer storage medium stores a computer program. When the computer program is executed by a processor, the steps in each of the above training method embodiments are implemented.
  • This application also provides a computer program product.
  • When the computer program product is run on a training device, the device is caused to perform the steps in each of the above training method embodiments.
  • Embodiments of the present application also provide a chip or integrated circuit.
  • the chip or integrated circuit includes: a processor, configured to call and run a computer program from a memory, so that a training device installed with the chip or integrated circuit performs the steps in each of the above training method embodiments.
  • The above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the apparatus is divided into different functional units or modules to complete all or part of the functions described above.
  • Each functional unit and module in the embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
  • the specific names of each functional unit and module are only for the convenience of distinguishing each other and are not used to limit the scope of protection of the present application.
  • For the specific working processes of the units and modules in the above system, refer to the corresponding processes in the foregoing method embodiments; they are not described again here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A training method, a training apparatus, a training device, and a storage medium, which relate to the technical field of computers. The training method comprises: randomly displaying a perceptual decision-making task, which comprises a visual classification task, an auditory classification task, and a visual-auditory classification task, wherein the visual classification task comprises respectively classifying M first pictures, the auditory classification task comprises respectively classifying N first sounds, and the visual-auditory classification task comprises respectively classifying L visual-auditory stimulation pairs, each visual-auditory stimulation pair comprising a second picture and a second sound corresponding to a target in the second picture; collecting behavior reaction data generated when a user completes the perceptual decision-making task; and determining a training result according to the behavior reaction data, wherein the training result comprises the accuracy of the user completing the perceptual decision-making task. In the training method, training is performed in a multi-channel combination mode by means of visual, auditory and visual-auditory channels, such that the perceptual decision-making capability of an individual can be effectively improved.

Description

Training method, training apparatus, training device, and storage medium
Technical Field
This application relates to the field of computer technology, and in particular to a training method, apparatus, device, and storage medium.
Background Art
Individuals experience a certain degree of cognitive decline as they age. Alzheimer's disease (AD) is a neurodegenerative disease with an insidious onset and a progressive course, whose clinical manifestations include comprehensive dementia such as memory impairment, aphasia, apraxia, agnosia, impaired visuospatial skills, executive dysfunction, and changes in personality and behavior.
For the elderly, Alzheimer's disease reduces perceptual decision-making ability, leading to poor performance on decision-making tasks that require high-level cognitive abilities such as sensory perception, attentional resources, and memory.
Technical Problem
Most existing research on perceptual decision-making in Alzheimer's patients is confined to a single visual channel, such as training individuals only at the visual level. However, such single-channel training is too limited to effectively improve an individual's perceptual decision-making ability.
Technical Solution
In view of this, this application provides a training method, a training apparatus, a training device, and a storage medium that can effectively improve an individual's perceptual decision-making ability.
In a first aspect, this application provides a training method, including: randomly displaying a perceptual decision-making task, where the perceptual decision-making task includes a visual classification task, an auditory classification task, and an audio-visual classification task; the visual classification task includes separately classifying M first pictures, the auditory classification task includes separately classifying N first sounds, and the audio-visual classification task includes separately classifying L audio-visual stimulus pairs, each audio-visual stimulus pair including a second picture and a second sound corresponding to a target in the second picture, where M ≥ 2, N ≥ 2, and L ≥ 2; collecting behavioral response data generated when a user completes the perceptual decision-making task; and determining a training result according to the behavioral response data, where the training result includes the accuracy with which the user completes the perceptual decision-making task.
In a possible implementation, the behavioral response data includes the classification result corresponding to each classification task in the perceptual decision-making task and the reaction time for completing each classification task. The training method further includes: inputting the classification results and reaction times into a preset drift-diffusion model for processing to obtain a drift rate, a decision boundary, and a non-decision time; and evaluating the user's perceptual decision-making ability according to the drift rate, the decision boundary, and the non-decision time.
In a possible implementation, the training method further includes: determining the user's health state according to the user's perceptual decision-making ability.
In a possible implementation, the training method further includes: obtaining M preset pictures; adjusting the basic attributes of each preset picture to obtain the M first pictures; and constructing the visual classification task from the M first pictures.
In a possible implementation, the training method further includes: obtaining N preset sounds; adjusting the sound attributes of each preset sound to obtain the N first sounds; and constructing the auditory classification task from the N first sounds.
In a possible implementation, the training method further includes: determining L second pictures among the M first pictures; determining L second sounds among the N first sounds; pairing the L second pictures with the L second sounds to obtain the L audio-visual stimulus pairs; and constructing the audio-visual classification task from the L audio-visual stimulus pairs.
In a possible implementation, the training method further includes: determining the stimulation intensity corresponding to each first picture and each first sound, where the stimulation intensity reflects the accuracy with which each first picture and each first sound is classified; selecting, from the M first pictures, pictures whose stimulation intensity is a first stimulation intensity and pictures whose stimulation intensity is a second stimulation intensity; selecting, from the N first sounds, sounds whose stimulation intensity is the first stimulation intensity and sounds whose stimulation intensity is the second stimulation intensity; constructing a perceptual decision-making task of the first stimulation intensity from the pictures and sounds of the first stimulation intensity; and constructing a perceptual decision-making task of the second stimulation intensity from the pictures and sounds of the second stimulation intensity.
In a second aspect, this application provides a training apparatus, including:
a display unit, configured to randomly display a perceptual decision-making task, where the perceptual decision-making task includes a visual classification task, an auditory classification task, and an audio-visual classification task; the visual classification task includes separately classifying M first pictures, the auditory classification task includes separately classifying N first sounds, and the audio-visual classification task includes separately classifying L audio-visual stimulus pairs, each audio-visual stimulus pair including a second picture and a second sound corresponding to a target in the second picture, where M ≥ 2, N ≥ 2, and L ≥ 2;
a collection unit, configured to collect behavioral response data generated when a user completes the perceptual decision-making task; and
a determination unit, configured to determine a training result according to the behavioral response data, where the training result includes the accuracy with which the user completes the perceptual decision-making task.
In a third aspect, this application provides a training device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the training method described in any implementation of the first aspect is implemented.
In a fourth aspect, this application provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the training method described in any implementation of the first aspect is implemented.
In a fifth aspect, embodiments of this application provide a computer program product. When the computer program product runs on a processor, the processor is caused to execute the training method described in any implementation of the first aspect.
Beneficial Effects
In the training method provided by this application, a perceptual decision-making task is randomly displayed to the user, and the user is trained on that task. During training, the behavioral response data generated when the user completes the perceptual decision-making task is collected, and the training result, such as the accuracy with which the user completes the task, is determined from that data. Because the perceptual decision-making task includes classification tasks on the visual, auditory, and audio-visual channels, training users with this task can accelerate information storage and encoding in higher-order cognitive processing and improve the user's reaction speed, thereby promoting the formation of perceptual decisions and effectively improving the individual's perceptual decision-making ability.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a training method provided by an exemplary embodiment of this application;
FIG. 2 is a schematic diagram of a first picture provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a first sound provided by an embodiment of this application;
FIG. 4 is a schematic diagram of an audio-visual stimulus pair provided by an embodiment of this application;
FIG. 5 is a specific flowchart of a training method according to another exemplary embodiment of this application;
FIG. 6 is a specific flowchart of a training method according to yet another exemplary embodiment of this application;
FIG. 7 is a schematic diagram of a training apparatus provided by an embodiment of this application;
FIG. 8 is a schematic diagram of a training device provided by another embodiment of this application.
Embodiments of the Invention
To make the purposes, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
Individuals experience a certain degree of cognitive decline as they age. Alzheimer's disease (AD) is a neurodegenerative disease with an insidious onset and a progressive course, whose clinical manifestations include comprehensive dementia such as memory impairment, aphasia, apraxia, agnosia, impaired visuospatial skills, executive dysfunction, and changes in personality and behavior.
Perceptual decision-making is a continuous, hierarchical set of cognitive operations that transform sensory information into goal-directed behavior and produce a response, ranging from encoding sensory information (such as the signals produced by objective things acting directly on the sense organs) and accumulating decision information, to applying decision rules to reach a decision, and finally producing a behavioral response. For example, a user sees a picture, judges that its content is an animal, and selects the animal option among preset options; this entire process is called perceptual decision-making.
For the elderly, Alzheimer's disease reduces perceptual decision-making ability, leading to poor performance on decision-making tasks that require high-level cognitive abilities such as sensory perception, attentional resources, and memory.
Most existing research on perceptual decision-making in Alzheimer's patients is confined to a single visual channel, such as training individuals only at the visual level. However, such single-channel training is too limited to effectively improve an individual's perceptual decision-making ability.
In view of this, this application provides a training method, a training apparatus, a training device, and a storage medium. A perceptual decision-making task is randomly displayed to the user, and the user is trained on that task. During training, the behavioral response data generated when the user completes the perceptual decision-making task is collected, and the training result, such as the accuracy with which the user completes the task, is determined from that data. Because the perceptual decision-making task includes classification tasks on the visual, auditory, and audio-visual channels, training users with this task can accelerate information storage and encoding in higher-order cognitive processing and improve the user's reaction speed, thereby promoting the formation of perceptual decisions and effectively improving the individual's perceptual decision-making ability.
The technical solutions of this application are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments.
An embodiment of this application provides training software. The training software can be installed on a training device, which may be any device capable of displaying pictures and playing audio, such as a smartphone, tablet, desktop computer, laptop, robot, or smart wearable. The training software provided by this application can both train the user and test the user's perceptual decision-making ability before or after training.
Please refer to FIG. 1, which is a schematic flowchart of a training method provided by an exemplary embodiment of this application. The training method shown in FIG. 1 may include S101 to S103, as follows:
S101: Randomly display a perceptual decision-making task.
The perceptual decision-making task includes a visual classification task, an auditory classification task, and an audio-visual classification task.
The visual classification task includes separately classifying M first pictures, M ≥ 2, where M denotes the number of first pictures and may be any positive integer greater than or equal to 2. A first picture may contain any object; for example, it may be a picture containing a face, a car, an animal, a plant, a building, food, daily necessities, an electronic device, a musical instrument, and so on. Different kinds of first pictures can be added according to actual training needs; this is only an example and is not limiting.
Please refer to FIG. 2, which is a schematic diagram of a first picture provided by an embodiment of this application. FIG. 2 shows one first picture in the visual classification task, namely a picture containing a face.
The channel through which the first pictures are obtained is not limited. For example, a first picture may be obtained by photography, collected from the Internet, produced by drawing, and so on.
The auditory classification task includes separately classifying N first sounds, N ≥ 2, where N denotes the number of first sounds and may be any positive integer greater than or equal to 2. A first sound may be audio containing any sound; for example, it may be audio containing a human voice, a car sound, an animal sound, an electronic device sound, a musical instrument sound, and so on. Different kinds of first sounds can be added according to actual training needs; this is only an example and is not limiting.
Please refer to FIG. 3, which is a schematic diagram of a first sound provided by an embodiment of this application. FIG. 3 shows one first sound in the auditory classification task, namely audio containing a human voice, specifically the voice of a little girl speaking.
The channel through which the first sounds are obtained is not limited. For example, a first sound may be obtained by recording, collected from the Internet, and so on.
The audio-visual classification task includes separately classifying L audio-visual stimulus pairs, each pair including a second picture and a second sound corresponding to the target in the second picture, L ≥ 2, where L denotes the number of audio-visual stimulus pairs and may be any positive integer greater than or equal to 2.
A second picture may contain any target; for example, it may be a picture containing a face, a car, an animal, a musical instrument, and so on. Correspondingly, a second sound may be audio containing a human voice, a car sound, an animal sound, a musical instrument sound, and so on.
For example, a picture containing a face and audio of the sound corresponding to that face form one audio-visual stimulus pair; a picture containing a car and audio of the sound corresponding to that car form another; a picture containing an animal and audio of the sound corresponding to that animal form another; and a picture containing a musical instrument and audio of the sound corresponding to that instrument form another.
Please refer to FIG. 4, which is a schematic diagram of an audio-visual stimulus pair provided by an embodiment of this application. FIG. 4 shows one audio-visual stimulus pair in the audio-visual classification task, consisting of a second picture and the second sound corresponding to the target in that picture. The second picture contains a car, and the second sound is audio containing a car sound, specifically a car horn.
The channel through which the audio-visual stimulus pairs are obtained is not limited. For example, the second picture in a pair may be selected from the first pictures, newly photographed, collected from the Internet, or produced by drawing; the second sound in a pair may be selected from the first sounds, obtained by recording, or collected from the Internet.
In one possible implementation, after the user starts the training software installed on the training device and selects the training option, the display interface of the training device begins to randomly display the perceptual decision-making task. When selecting the training option, the user may click manually, operate a remote control, or use voice control.
For example, a fixation point is presented at the center of the display interface of the training device. The presentation duration of the fixation point can be set as desired, for example to 2000 ms, after which the visual, auditory, and audio-visual classification tasks begin to be displayed at random. One way to display them is to finish one task before showing the next, until all tasks have been displayed. For example, the visual classification task is shown first; after all M first pictures have been displayed, the auditory classification task is shown; after all N first sounds have been displayed, the audio-visual classification task is shown, until all L audio-visual stimulus pairs have been displayed.
It is worth noting that the display order of the visual, auditory, and audio-visual classification tasks is not limited. For example, the order may be visual, auditory, audio-visual; or visual, audio-visual, auditory; or audio-visual, visual, auditory; and so on.
Alternatively, the visual, auditory, and audio-visual classification tasks may be interleaved, that is, the M first pictures, N first sounds, and L audio-visual stimulus pairs are displayed interspersed with one another until all tasks have been displayed. For example, several first pictures are shown, then several audio-visual stimulus pairs, then several first sounds, then several more first pictures, then several more sounds, and so on until all tasks have been displayed.
As another example, one first picture is shown, then one first sound, then one audio-visual stimulus pair, then another first sound, then another first picture, and so on. This is only an example and is not limiting.
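As a minimal sketch of such an interleaved, randomized presentation order (the function and variable names are illustrative, not part of this application):

```python
# Pool the visual, auditory and audio-visual trials, then shuffle them so
# the three kinds of classification tasks appear in random order.
import random

def build_trial_sequence(first_pictures, first_sounds, av_pairs):
    trials = ([("visual", p) for p in first_pictures] +
              [("auditory", s) for s in first_sounds] +
              [("audio-visual", pair) for pair in av_pairs])
    random.shuffle(trials)  # random order across and within task types
    return trials
```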
It is worth noting that, to ensure the validity of the training results, the first pictures in the visual classification task and the second pictures in the audio-visual classification task are all presented against a background of a uniform color and at the same visual angle. For example, they are all presented on a gray or white background at a visual angle of 8° × 8°. This is only an example and is not limiting.
S102: Collect the behavioral response data generated when the user completes the perceptual decision-making task.
For example, in addition to each perceptual decision-making task, the display interface of the training device also shows the options corresponding to each task. While the tasks are randomly displayed, the user performs a selection operation for each classification task; the data generated by these selection operations is the behavioral response data, which the training device collects.
For example, the current display shows a first picture in the visual classification task, with two options displayed side by side below, above, to the right of, or to the left of the picture. When the first picture contains a face, the correct choice is to click the left of the two options; when the first picture contains a car, the correct choice is to click the right of the two options.
As another example, the training device plays a first sound in the auditory classification task while the display shows two options side by side. When the first sound is a human voice, the correct choice is to click the left option; when the first sound is a car sound, the correct choice is to click the right option. It is worth noting that the distance between the user and the training device can be set and adjusted during training; for example, the user may sit 60 cm from the display and speakers.
During training, the user makes different choices for the different perceptual decision-making tasks according to his or her own ability, that is, selects the option the user believes to be correct. The training device records the choice the user makes for each classification task.
In one possible implementation, the user's selection for each classification task can be made with a mouse. For example, for a first picture containing a face, a first sound containing a human voice, a second picture containing a face, and the corresponding second sound containing a human voice, the correct choice is a left mouse click. For a first picture containing a car, a first sound containing a car sound, a second picture containing a car, and the corresponding second sound containing a car sound, the correct choice is a right mouse click.
For example, the current display shows a second picture in the audio-visual classification task while the training device plays the corresponding second sound. If the second picture contains a face and the second sound contains a human voice, the correct choice is a left mouse click.
During training, the user makes different choices for the different perceptual decision-making tasks according to his or her own ability, that is, clicks the left or right mouse button. The training device records the choice the user makes for each classification task.
Optionally, in one possible implementation, to ensure the effectiveness of training, adjacent classification tasks may be displayed with a preset time interval between them, and each classification task may be displayed for a preset duration. For example, the preset interval between two adjacent classification tasks may be 1200 to 1500 ms, and the display duration of each classification task may be 300 ms. Likewise, the preset interval between two adjacent audio-visual stimulus pairs may be 1200 to 1500 ms, and the display duration of each pair may be 300 to 500 ms. This is only an example and is not limiting.
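A schematic single trial with this timing might look as follows; present_stimulus() and wait_for_choice() stand in for device-specific display and input functions and are hypothetical:

```python
# One trial: a random 1200-1500 ms inter-trial interval, a stimulus shown
# for 300 ms, and the reaction time measured from stimulus onset to the
# user's selection.
import random
import time

def run_trial(stimulus):
    time.sleep(random.uniform(1.2, 1.5))        # inter-trial interval
    onset = time.monotonic()
    present_stimulus(stimulus, duration=0.3)    # hypothetical display call
    choice = wait_for_choice()                  # hypothetical; blocks until a click
    rt_ms = (time.monotonic() - onset) * 1000.0 # reaction time in ms
    return choice, rt_ms
```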
S103: Determine the training result according to the behavioral response data.
The training result includes the accuracy with which the user completes the perceptual decision-making task.
For example, the behavioral response data is the data generated by the user's selection operation for each classification task. The behavioral response data for each classification task is compared with the correct choice for that task, and the training result is determined from the comparison.
For example, each correct choice scores one point, while no choice or a wrong choice scores nothing. A score is obtained from the user's behavioral response data, and the proportion of this score to the total score (the score obtained when every task is answered correctly) gives the accuracy with which the user completed the perceptual decision-making task. This is only an example and is not limiting.
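A minimal sketch of this scoring rule (the response encoding is illustrative):

```python
# One point per correct choice; no points for a miss (None) or a wrong
# choice. Accuracy is the score divided by the total attainable score.
def accuracy(responses, correct_answers):
    score = sum(1 for r, c in zip(responses, correct_answers) if r == c)
    return score / len(correct_answers)

print(accuracy(["left", "right", None, "left"],
               ["left", "right", "left", "right"]))  # -> 0.5
```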
In this implementation, a perceptual decision-making task is randomly displayed to the user, and the user is trained on that task. During training, the behavioral response data generated when the user completes the task is collected, and the training result, such as the accuracy with which the user completes the task, is determined from that data. Because the perceptual decision-making task includes classification tasks on the visual, auditory, and audio-visual channels, training users with this task can accelerate information storage and encoding in higher-order cognitive processing and improve the user's reaction speed, thereby promoting the formation of perceptual decisions and effectively improving the individual's perceptual decision-making ability.
Please refer to FIG. 5, which is a specific flowchart of a training method according to another exemplary embodiment of this application. The training method shown in FIG. 5 may include S201 to S205, as follows:
S201: Randomly display a perceptual decision-making task.
S202: Collect the behavioral response data generated when the user completes the perceptual decision-making task.
S203: Determine the training result according to the behavioral response data.
S201 to S203 above are identical to S101 to S103 in the embodiment corresponding to FIG. 1; for details, refer to the description of S101 to S103 in that embodiment, which is not repeated here.
The behavioral response data includes the classification result corresponding to each classification task in the perceptual decision-making task and the reaction time for completing each classification task.
For example, the classification result for each task is the selection operation made by the user, such as clicking the left of two options displayed side by side, clicking the right of two options displayed side by side, clicking the left mouse button, or clicking the right mouse button.
The reaction time for completing each classification task is determined by the time at which the task begins to be displayed and the time at which the user makes a selection. For example, for a given classification task, timing starts when the task is displayed and stops as soon as the user makes a selection; the recorded interval is the reaction time for that task.
S204: Input the classification results and reaction times into a preset drift-diffusion model for processing to obtain the drift rate, the decision boundary, and the non-decision time.
The preset drift-diffusion model simulates the decision process in a classification task. Each of the user's choices is represented by an upper boundary and a lower boundary; the perceptual decision process accumulates evidence over time until it reaches one of the two boundaries, which then triggers the corresponding behavioral response.
The drift rate, decision boundary, and non-decision time are different parameters obtained after the drift-diffusion model processes the classification results and reaction times. Each parameter maps onto the cognitive processing underlying perceptual decision-making behavior: the drift rate describes the speed of information accumulation, the decision boundary describes the response boundary that must be reached before a response is made, and the non-decision time describes the time taken by sensory encoding and the motor response.
The distribution of different responses affects the values of the parameters in the drift-diffusion model. Therefore, by computing the model's specific parameters under different conditions, the latent cognitive processes underlying the user's cross-channel perceptual decision-making can be revealed, and the user's training effect can be determined.
In one possible implementation, to prevent the user from determining classification results by quick guessing, which would bias the training results, data with reaction times shorter than a preset reaction time is removed before the classification results and reaction times are input into the preset drift-diffusion model. For example, data with reaction times shorter than 300 ms is removed.
Optionally, after data with reaction times shorter than the preset reaction time has been removed, the standard deviation of all remaining reaction times can be computed, and data whose reaction times fall outside a preset range of standard deviations can also be removed. For example, data with reaction times beyond plus or minus 2.5 standard deviations is removed. This is only an example and is not limiting.
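A sketch of this two-step cleaning, assuming the behavioral data is held in a pandas DataFrame with a reaction-time column 'rt' in milliseconds (the column name is an assumption):

```python
import pandas as pd

def clean_reaction_times(df: pd.DataFrame) -> pd.DataFrame:
    df = df[df["rt"] >= 300]                    # drop fast guesses (< 300 ms)
    mean, sd = df["rt"].mean(), df["rt"].std()  # stats of the remaining RTs
    lo, hi = mean - 2.5 * sd, mean + 2.5 * sd   # +/- 2.5 standard deviations
    return df[(df["rt"] >= lo) & (df["rt"] <= hi)]
```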
The function used by the preset drift-diffusion model is as follows, where f(t) denotes the conditional probability distribution of the reaction time t. The original equation (1) is only referenced by an image in the source; a standard form of the first-passage-time density of the drift-diffusion process, assumed here, is

f(t | v, a, z) = (π / a²) · exp(−v·a·z − v²·t / 2) · Σ_{k=1}^{∞} k · sin(k·π·z) · exp(−k²·π²·t / (2a²))    (1)

where v is the drift rate, a the decision boundary, and z the relative starting point. According to Bayes' theorem, the distribution over the model parameters can be split into a prior part and a likelihood part. The prior refers to the user's subjective guess of the probability distribution before the drift-diffusion model parameters are known, while the likelihood refers to the drift-diffusion model parameters computed once the probability distribution of the behavioral response data is available.
Therefore, the key to the drift-diffusion model is to obtain the parameter values under the likelihood. Because of the complexity of the formula, the parameter values cannot be obtained directly, so the Markov chain Monte Carlo (MCMC) algorithm is used. The MCMC algorithm characterizes the function by continuous sampling, so that population parameters can be inferred from samples. The likelihood part of the Bayesian computation is therefore evaluated with the MCMC algorithm to estimate the parameter distribution.
For example, the HDDM toolbox for the Python programming language can be used. This toolbox provides hierarchical Bayesian parameter estimation for the drift-diffusion model and allows the model parameters of every participant to be estimated simultaneously, yielding the drift rate, the decision boundary, and the non-decision time.
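A minimal fitting sketch with the HDDM toolbox (the CSV file name is an assumption; HDDM expects per-trial columns such as 'rt', 'response', and 'subj_idx'):

```python
import hddm

# One row per trial: 'rt' in seconds, 'response' (1 = correct, 0 = error),
# 'subj_idx' identifying the participant.
data = hddm.load_csv("training_responses.csv")

model = hddm.HDDM(data)       # hierarchical Bayesian drift-diffusion model
model.find_starting_values()  # reasonable starting point for MCMC
model.sample(2000, burn=200)  # draw posterior samples

# Posterior summaries for drift rate (v), boundary separation (a),
# and non-decision time (t).
model.print_stats()
```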
Optionally, when the classification results and reaction times are input into the preset drift-diffusion model for processing, parameters such as the relative starting point, the inter-training variation of the relative starting point, the inter-training variation of the drift rate, and the inter-training variation of the non-decision time can be obtained in addition to the drift rate, decision boundary, and non-decision time.
The relative starting point describes the initial preference for one of the responses. The inter-training variation of the relative starting point is expressed as the range of a uniform distribution around the mean relative starting point and describes the distribution of actual starting points in a particular training session. The inter-training variation of the drift rate is expressed as the standard deviation of a normal distribution whose mean is the drift rate and describes the distribution of actual drift rates in a particular training session. The inter-training variation of the non-decision time is expressed as the range of a uniform distribution around the mean non-decision time and describes the distribution of actual non-decision times during training.
S205: Evaluate the user's perceptual decision-making ability according to the drift rate, the decision boundary, and the non-decision time.
For example, the drift rate, decision boundary, and non-decision time each correspond to a different indicator range. For instance, the indicator range for the drift rate may be greater than −5 and less than 5, the range for the decision boundary may be greater than 0.5 and less than 2, and the range for the non-decision time may be greater than 0.1 and less than 0.5.
For example, if the user's drift rate, decision boundary, and non-decision time are all within their corresponding indicator ranges, the user's perceptual decision-making ability is evaluated as strong. If two of the three are within their corresponding ranges, the ability is evaluated as moderate. If only one is within its corresponding range, or none are, the ability is evaluated as poor. This is only an example and is not limiting.
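A minimal sketch of this grading rule, using the example indicator ranges given above:

```python
def assess_ability(drift_rate: float, boundary: float, non_decision: float) -> str:
    in_range = sum([
        -5 < drift_rate < 5,       # drift rate indicator range
        0.5 < boundary < 2,        # decision boundary indicator range
        0.1 < non_decision < 0.5,  # non-decision time indicator range
    ])
    if in_range == 3:
        return "strong"
    if in_range == 2:
        return "moderate"
    return "poor"

print(assess_ability(1.8, 1.2, 0.3))  # -> "strong"
```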
In this implementation, the user's classification results and reaction times are processed with a preset drift-diffusion model to obtain the drift rate, decision boundary, and non-decision time. These parameters accurately reflect the latent cognitive processes underlying the user's cross-channel perceptual decision-making, and analyzing them allows the user's perceptual decision-making ability to be evaluated accurately.
Please refer to FIG. 6, which is a specific flowchart of a training method according to yet another exemplary embodiment of this application. The training method shown in FIG. 6 may include S301 to S306. It is worth noting that S301 to S305 in this embodiment are identical to S201 to S205 in the embodiment corresponding to FIG. 5; for details, refer to the description of S201 to S205 in that embodiment, which is not repeated here. S306 is as follows:
S306: Determine the user's health state according to the user's perceptual decision-making ability.
For example, some illnesses reduce a user's perceptual decision-making ability; for the elderly, Alzheimer's disease lowers this ability. The perceptual decision-making ability of healthy users is obtained and used as a baseline. The user's perceptual decision-making ability obtained in the present training is compared with that of healthy users, and the health state of the trained user is determined from the comparison.
For example, if healthy users show strong perceptual decision-making ability while the user in the present training shows poor ability, the user's health state is determined to be unhealthy. Specifically, the trained user may be identified as an Alzheimer's patient. This is only an example and is not limiting.
In this implementation, comparing the trained user's perceptual decision-making ability with that of healthy users allows the user's health state to be determined accurately. For example, this helps to identify Alzheimer's patients accurately and promptly so that they can be treated as early as possible.
Optionally, in one possible implementation, before the perceptual decision-making task is randomly displayed, the training method provided by this application may further include: obtaining M preset pictures; adjusting the basic attributes of each preset picture to obtain the M first pictures; and constructing the visual classification task from the M first pictures.
For example, a preset picture is an original first picture. Basic attributes may include the picture's spatial frequency, contrast, brightness, pixels, size, clarity, format, and so on. For example, several preset pictures are obtained, half containing faces and half containing cars, and their spatial frequency, contrast, brightness, and pixels are adjusted to be consistent; for instance, all pictures may be resized to 670 × 670 pixels.
After the spatial frequency, contrast, brightness, and pixels have been made consistent, preset software (such as Matlab) is used to adjust the clarity of each picture through the signal-to-noise ratio. For example, the clarity of each picture is adjusted to eight different levels: 30%, 32.5%, 35%, 37.5%, 40%, 42.5%, 45%, and 50%.
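A sketch of such a signal-to-noise adjustment; the linear blending rule is an assumption, and the eight levels are the ones listed above:

```python
import numpy as np

def set_clarity(image: np.ndarray, snr: float) -> np.ndarray:
    """Blend an image (float array in [0, 1]) with uniform noise; snr in (0, 1]."""
    noise = np.random.rand(*image.shape)
    return snr * image + (1.0 - snr) * noise

levels = [0.30, 0.325, 0.35, 0.375, 0.40, 0.425, 0.45, 0.50]
image = np.random.rand(670, 670)  # placeholder for a 670 x 670 picture
stimuli = [set_clarity(image, s) for s in levels]
```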
After the above adjustments, M first pictures are obtained, for example 240 first pictures. According to the specific content of each first picture, the correct option is set for it: for example, the correct choice for a first picture may be clicking the left of two options displayed side by side, clicking the right of the two options, clicking the left mouse button, or clicking the right mouse button. The visual classification task is constructed from the first pictures and their corresponding correct options.
In this implementation, every first picture obtained has had its basic attributes adjusted, which effectively avoids training bias caused by differences in the basic attributes of the pictures and ensures that these attributes do not influence the user's choices, thereby improving the accuracy of the training results.
Optionally, in one possible implementation, before the perceptual decision-making tasks are randomly presented, the training method provided by this application may further include: obtaining N preset sounds; adjusting the sound attributes of each preset sound to obtain N first sounds; and constructing the auditory classification task from the N first sounds.
Illustratively, a preset sound is the original version of a first sound. The sound attributes may include the frequency, pitch, loudness, and timbre of the sound. For example, several preset sounds are obtained, half of them human voices and half of them car sounds, and their loudness and frequency are made consistent; for instance, preset software (such as Matlab) normalizes the sounds so that their loudness and frequency match. Speech synthesis software then embeds the processed sounds in white noise of different loudness, yielding first sounds with different signal-to-noise ratios.
For example, the loudness of each processed sound may be reduced to 50%, and speech synthesis software may embed the re-adjusted sounds in eight white-noise backgrounds of different loudness, yielding first sounds with signal-to-noise ratios of 12.5%, 25%, 37.5%, 50%, 62.5%, 75%, 87.5%, and 100%. The overall loudness of these first sounds is consistent, for example 60 dB.
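The exact mixing rule is likewise not specified; the sketch below assumes mono float waveforms and treats the signal-to-noise level as the relative weight of the sound against white noise, which is one plausible reading of "embedding the sound in white noise", not the only one.

```python
import numpy as np

SNR_LEVELS = [0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1.000]

def embed_in_noise(waveform, snr, rng=None):
    """Return one first sound: the normalized waveform mixed with white noise."""
    rng = rng or np.random.default_rng()
    signal = waveform / (np.max(np.abs(waveform)) + 1e-8)  # normalize the sound
    signal = 0.5 * signal                                  # drop loudness to 50%
    noise = rng.standard_normal(signal.shape)
    noise = noise / (np.max(np.abs(noise)) + 1e-8)
    return snr * signal + (1.0 - snr) * noise              # higher snr, clearer sound
```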
After the above adjustments, N first sounds are obtained, for example 240 first sounds. According to the specific content of each first sound, a correct option is set for it: for example, the correct choice for a first sound may be clicking the left one of two options displayed side by side, clicking the right one of the two options, clicking the left mouse button, or clicking the right mouse button. The auditory classification task is then constructed from the first sounds and the correct option corresponding to each of them.
In this implementation, every first sound has had its sound attributes adjusted, which effectively avoids training bias caused by differences in sound attributes between sounds and ensures that those attributes do not influence the user's choices, thereby improving the accuracy of the training results.
Optionally, in one possible implementation, before the perceptual decision-making tasks are randomly presented, the training method provided by this application may further include: determining L second pictures among the M first pictures; determining L second sounds among the N first sounds; pairing the L second pictures with the L second sounds to obtain L audio-visual stimulus pairs; and constructing the audio-visual classification task from the L audio-visual stimulus pairs.
Illustratively, the second picture in an audio-visual stimulus pair may be selected from the first pictures, and the second sound may be selected from the first sounds. For example, L of the M first pictures are selected and determined to be the L second pictures, and L of the N first sounds are selected and determined to be the L second sounds.
It can be understood that, since a second sound is the sound corresponding to the target in a second picture, pictures whose target has a matching sound are selected when determining the second pictures, which speeds up the construction of the audio-visual classification task. For example, if a first picture containing a car is selected from the M first pictures and the N first sounds happen to include the sound of that car, the selected picture is determined to be a second picture and the matching sound is determined to be a second sound. This is merely an illustrative example and is not limiting.
The selected L second pictures and L second sounds are paired to obtain L audio-visual stimulus pairs, and a correct option is set for each pair: for example, the correct choice for an audio-visual stimulus pair may be clicking the left one of two options displayed side by side, clicking the right one of the two options, clicking the left mouse button, or clicking the right mouse button. The audio-visual classification task is then constructed from the audio-visual stimulus pairs and the correct option corresponding to each of them.
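The task representation is not prescribed either; one minimal sketch, assuming string paths for the stimuli and a simple left/right response coding, is the following, where the record layout is an illustrative choice.

```python
from dataclasses import dataclass

@dataclass
class AVPair:
    picture: str   # path of a second picture, e.g. one containing a car
    sound: str     # path of the second sound matching the target in the picture
    correct: str   # e.g. "left", "right", "mouse_left" or "mouse_right"

def build_audiovisual_task(second_pictures, second_sounds, correct_options):
    # assumes the i-th sound corresponds to the target in the i-th picture
    return [AVPair(p, s, c)
            for p, s, c in zip(second_pictures, second_sounds, correct_options)]
```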
In this implementation, the second picture and second sound in each audio-visual stimulus pair are selected from the first pictures and first sounds. Since the basic attributes of the first pictures and the sound attributes of the first sounds have already been adjusted, the attributes of every second picture and second sound are effectively adjusted as well. This avoids training bias caused by attribute differences between stimuli and ensures that these attributes do not influence the user's choices, thereby improving the accuracy of the training results.
Optionally, in one possible implementation, pre-training may be carried out before formal training to improve the accuracy of the training results. Specifically, the training method provided by this application may further include S401 to S405.
S401: Determine the stimulus intensity corresponding to each first picture and each first sound.
S402: Among the M first pictures, select pictures whose stimulus intensity is a first stimulus intensity and pictures whose stimulus intensity is a second stimulus intensity.
S403: Among the N first sounds, select sounds whose stimulus intensity is the first stimulus intensity and sounds whose stimulus intensity is the second stimulus intensity.
The stimulus intensity reflects the accuracy with which each first picture and each first sound is classified.
Illustratively, the M first pictures are presented on the display interface of the training device, the user makes a selection for each first picture, and each selection is compared with the correct choice for that picture. One point is scored for each correct selection; no points are scored when no selection is made or the selection is wrong. A score is obtained from all of the user's selections, and the ratio of this score to the total score (the score obtained when every first picture is classified correctly) gives the accuracy of this pre-training session.
Based on this accuracy, the stimulus intensity at which the user classifies the first pictures correctly is determined. When the accuracy equals a first threshold, that stimulus intensity is the first stimulus intensity; when the accuracy equals a second threshold, it is the second stimulus intensity. Here the first threshold is greater than the second threshold, and the first stimulus intensity is higher than the second stimulus intensity. For example, the first threshold is 90%, the second threshold is 70%, the first stimulus intensity is high intensity, and the second stimulus intensity is low intensity.
Illustratively, if the accuracy of this pre-training session is 90%, the stimulus intensity at which the user classifies the first pictures correctly is the first stimulus intensity, that is, high intensity; if the accuracy is 70%, it is the second stimulus intensity, that is, low intensity. This is merely an illustrative example and is not limiting.
Illustratively, the N first sounds are likewise presented by the training device, the user makes a selection for each first sound, and each selection is compared with the correct choice for that sound. One point is scored for each correct selection; no points are scored when no selection is made or the selection is wrong. A score is obtained from all of the user's selections, and the ratio of this score to the total score (the score obtained when every first sound is classified correctly) gives the accuracy of this pre-training session.
Based on this accuracy, the stimulus intensity at which the user classifies the first sounds correctly is determined. When the accuracy equals the first threshold, that stimulus intensity is the first stimulus intensity; when the accuracy equals the second threshold, it is the second stimulus intensity. Here the first threshold is greater than the second threshold, and the first stimulus intensity is higher than the second stimulus intensity. For example, the first threshold is 90%, the second threshold is 70%, the first stimulus intensity is high intensity, and the second stimulus intensity is low intensity.
Illustratively, if the accuracy of this pre-training session is 90%, the stimulus intensity at which the user classifies the first sounds correctly is the first stimulus intensity, that is, high intensity; if the accuracy is 70%, it is the second stimulus intensity, that is, low intensity. This is merely an illustrative example and is not limiting.
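The scoring rule above has a direct closed form. The sketch below implements it, with one assumption made explicit: accuracies between the two exact threshold values are binned to the nearest threshold not exceeding them, which the description itself does not specify.

```python
def pretraining_accuracy(responses, correct_answers):
    """One point per correct selection; missing or wrong selections score zero."""
    score = sum(1 for r, c in zip(responses, correct_answers)
                if r is not None and r == c)
    return score / len(correct_answers)

def stimulus_intensity(accuracy, first_threshold=0.90, second_threshold=0.70):
    if accuracy >= first_threshold:
        return "first"      # high stimulus intensity
    if accuracy >= second_threshold:
        return "second"     # low stimulus intensity
    return None             # below both thresholds: not covered by the description
```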
S404: Construct a perceptual decision-making task of the first stimulus intensity from the pictures of the first stimulus intensity and the sounds of the first stimulus intensity.
The perceptual decision-making task of the first stimulus intensity includes a visual classification task, an auditory classification task, and an audio-visual classification task, all of the first stimulus intensity.
It can be understood that the process of constructing these tasks of the first stimulus intensity is similar to the process of constructing the visual, auditory, and audio-visual classification tasks described above.
The difference is that the tasks described above are constructed from the first pictures, first sounds, second pictures, and second sounds, whereas in this implementation they are constructed from the pictures and sounds of the first stimulus intensity. For the specific process, reference may be made to the construction of the visual, auditory, and audio-visual classification tasks described above, which is not repeated here.
For example, the constructed visual classification task of the first stimulus intensity contains 50 pictures of the first stimulus intensity, the auditory classification task contains 50 sounds of the first stimulus intensity, and the audio-visual classification task contains 50 audio-visual stimulus pairs.
S405: Construct a perceptual decision-making task of the second stimulus intensity from the pictures of the second stimulus intensity and the sounds of the second stimulus intensity.
The perceptual decision-making task of the second stimulus intensity includes a visual classification task, an auditory classification task, and an audio-visual classification task, all of the second stimulus intensity.
It can be understood that the process of constructing these tasks of the second stimulus intensity is similar to the process of constructing the visual, auditory, and audio-visual classification tasks described above.
The difference is that the tasks described above are constructed from the first pictures, first sounds, second pictures, and second sounds, whereas in this implementation they are constructed from the pictures and sounds of the second stimulus intensity. For the specific process, reference may be made to the construction of the visual, auditory, and audio-visual classification tasks described above, which is not repeated here.
For example, the constructed visual classification task of the second stimulus intensity contains 50 pictures of the second stimulus intensity, the auditory classification task contains 50 sounds of the second stimulus intensity, and the audio-visual classification task contains 50 audio-visual stimulus pairs.
Illustratively, the constructed perceptual decision-making tasks of the first and second stimulus intensities are used to train different users (for example, healthy older adults and Alzheimer's disease patients). Behavioral response data generated while a user completes the tasks of both stimulus intensities are collected, and a target training result is determined from these data; the target training result includes the accuracy of the perceptual decision-making tasks at the first stimulus intensity and at the second stimulus intensity.
In this implementation, perceptual decision-making tasks of different stimulus intensities are constructed, so that different users can be trained with tasks of different intensities and the perceptual decision-making ability of different users can be improved in a targeted manner.
Optionally, in one possible implementation, the training method provided by this application may further include adjusting the difficulty of the perceptual decision-making tasks according to the training results, thereby improving the user's perceptual decision-making ability more effectively.
For example, when the accuracy with which the user completes the perceptual decision-making tasks is greater than a preset accuracy, the current training is going well and the difficulty can be increased: the variety of pictures and sounds in the tasks can be increased gradually, the preset time interval between two adjacent classification tasks can be shortened, the number of options for each classification task can be increased, and so on.
Conversely, when the accuracy is less than or equal to the preset accuracy, the current training is not going well and the difficulty can be reduced: the variety of pictures and sounds in the tasks can be decreased, the preset time interval between two adjacent classification tasks can be lengthened, and so on.
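A minimal sketch of this rule follows, assuming the task configuration is held in a dictionary; the step sizes and the field names n_categories, interval_s, and n_options are illustrative choices, not values fixed by the description.

```python
def adjust_difficulty(task, accuracy, preset_accuracy=0.80):
    """Raise or lower task difficulty depending on the user's accuracy."""
    if accuracy > preset_accuracy:                               # training going well
        task["n_categories"] += 1                                # more picture/sound types
        task["interval_s"] = max(0.5, task["interval_s"] - 0.1)  # shorter gap between tasks
        task["n_options"] += 1                                   # more response options
    else:                                                        # training going poorly
        task["n_categories"] = max(2, task["n_categories"] - 1)  # fewer types
        task["interval_s"] += 0.1                                # longer gap between tasks
    return task

task = {"n_categories": 2, "interval_s": 1.0, "n_options": 2}
task = adjust_difficulty(task, accuracy=0.85)   # above the preset accuracy: harder task
```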
Optionally, in one possible implementation, the training method provided by this application may also use a race (competition) model to study cross-channel effects in Alzheimer's disease patients. Compared with single-channel information (visual or auditory information alone), individuals respond faster when visual and auditory information appear simultaneously. This phenomenon is called the redundant signal effect (RSE).
The RSE can be explained by statistical facilitation: an individual responds to whichever single-channel stimulus (visual or auditory) within the multisensory (audio-visual) stimulus first reaches the sensory threshold, so that two-channel information speeds up the response to a stimulus even when no integration occurs. Training with the multichannel combination of this application enables an individual to reach the sensory threshold earlier under multisensory (audio-visual) stimulation, thereby improving the individual's perceptual decision-making ability.
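One standard way to test whether an observed RSE exceeds what statistical facilitation alone allows is Miller's race-model inequality, P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V). The sketch below is an assumption about how such a race model could be applied to the collected reaction times; the description itself does not spell out this analysis.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical P(RT <= t) for a sample of reaction times."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, time_points):
    """Positive values mean audio-visual responses are faster than any race
    between the two single channels can explain (evidence of true integration)."""
    return [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
            for t in time_points]
```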
Figure 7 is a schematic diagram of the training apparatus provided by an embodiment of this application. As shown in Figure 7, the training apparatus provided by this embodiment includes:
a display unit 510, configured to randomly present perceptual decision-making tasks, the perceptual decision-making tasks including a visual classification task, an auditory classification task, and an audio-visual classification task, the visual classification task including classifying M first pictures, the auditory classification task including classifying N first sounds, and the audio-visual classification task including classifying L audio-visual stimulus pairs, each audio-visual stimulus pair including a second picture and a second sound corresponding to a target in the second picture, where M ≥ 2, N ≥ 2, and L ≥ 2;
a collection unit 520, configured to collect behavioral response data generated by a user when completing the perceptual decision-making tasks; and
a determining unit 530, configured to determine a training result from the behavioral response data, the training result including the accuracy with which the user completes the perceptual decision-making tasks.
Optionally, the behavioral response data include the classification result corresponding to each classification task in the perceptual decision-making tasks and the reaction time for completing each classification task.
Optionally, the training apparatus further includes:
an evaluation unit, configured to input the classification results and the reaction times into a preset drift diffusion model for processing to obtain a drift rate, a decision boundary, and a non-decision time, and to evaluate the user's perceptual decision-making ability according to the drift rate, the decision boundary, and the non-decision time (a minimal closed-form sketch of one such model follows this list of units).
Optionally, the training apparatus further includes:
a state determination unit, configured to determine the user's health state according to the user's perceptual decision-making ability.
Optionally, the training apparatus further includes:
a first construction unit, configured to obtain M preset pictures, adjust the basic attributes of each preset picture to obtain M first pictures, and construct the visual classification task from the M first pictures.
Optionally, the training apparatus further includes:
a second construction unit, configured to obtain N preset sounds, adjust the sound attributes of each preset sound to obtain N first sounds, and construct the auditory classification task from the N first sounds.
Optionally, the training apparatus further includes:
a third construction unit, configured to determine L second pictures among the M first pictures, determine L second sounds among the N first sounds, pair the L second pictures with the L second sounds to obtain the L audio-visual stimulus pairs, and construct the audio-visual classification task from the L audio-visual stimulus pairs.
Optionally, the training apparatus further includes:
a fourth construction unit, configured to determine the stimulus intensity corresponding to each first picture and each first sound, the stimulus intensity reflecting the accuracy with which each first picture and each first sound is classified; select, among the M first pictures, pictures whose stimulus intensity is a first stimulus intensity and pictures whose stimulus intensity is a second stimulus intensity; select, among the N first sounds, sounds whose stimulus intensity is the first stimulus intensity and sounds whose stimulus intensity is the second stimulus intensity; construct a perceptual decision-making task of the first stimulus intensity from the pictures and sounds of the first stimulus intensity; and construct a perceptual decision-making task of the second stimulus intensity from the pictures and sounds of the second stimulus intensity.
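As referenced above, a minimal closed-form sketch of one drift diffusion fit is the EZ-diffusion model (Wagenmakers et al., 2007), which recovers a drift rate, a decision boundary, and a non-decision time from accuracy and correct-response reaction times. The description only says a preset drift diffusion model is used; EZ-diffusion is an assumption chosen here for its closed form, not necessarily the model meant.

```python
import math

def ez_diffusion(p_correct, rt_var, rt_mean, s=0.1):
    """EZ-diffusion: drift rate v, decision boundary a, non-decision time Ter.

    p_correct must lie strictly between 0.5 and 1 (apply an edge correction
    first if it is exactly 0, 0.5 or 1). rt_var and rt_mean are the variance
    and mean of correct-response reaction times in seconds."""
    L = math.log(p_correct / (1.0 - p_correct))            # logit of accuracy
    x = L * (L * p_correct**2 - L * p_correct + p_correct - 0.5) / rt_var
    v = math.copysign(s * x ** 0.25, p_correct - 0.5)      # drift rate
    a = s**2 * L / v                                       # decision boundary
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, rt_mean - mdt                             # Ter = mean RT - decision time
```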
Referring to Figure 8, a schematic diagram of a training device provided by another embodiment of this application: as shown in Figure 8, the training device 6 of this embodiment includes a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60. When the processor 60 executes the computer program 62, the steps in each of the above training method embodiments are implemented, for example S101 to S103 shown in Figure 1. Alternatively, when the processor 60 executes the computer program 62, the functions of the units in the above embodiments are implemented, for example the functions of units 510 to 530 shown in Figure 7.
Exemplarily, the computer program 62 may be divided into one or more units, which are stored in the memory 61 and executed by the processor 60 to complete this application. The one or more units may be a series of computer instruction segments capable of performing specific functions, and these segments are used to describe the execution of the computer program 62 in the training device 6. For example, the computer program 62 may be divided into a display unit, a collection unit, and a determining unit, whose specific functions are as described above.
The training device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art can understand that Figure 8 is merely an example of the training device 6 and does not limit the device, which may include more or fewer components than shown, combine certain components, or use different components; for example, the training device may also include input/output devices, network access devices, and buses.
The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the training device, such as a hard disk or memory of the device. The memory 61 may also be an external storage terminal of the training device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the training device. Further, the memory 61 may include both an internal storage unit of the device and an external storage terminal. The memory 61 is used to store the computer instructions and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
Embodiments of this application also provide a computer storage medium, which may be non-volatile or volatile. The computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the above training method embodiments are implemented.
This application also provides a computer program product; when the computer program product runs on a training device, the device performs the steps in each of the above training method embodiments.
Embodiments of this application also provide a chip or integrated circuit, including a processor configured to call and run a computer program from a memory, so that a training device in which the chip or integrated circuit is installed performs the steps in each of the above training method embodiments.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is only an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or as a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis. For parts that are not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the protection scope of this application.

Claims (10)

  1. A training method, comprising:
    randomly presenting perceptual decision-making tasks, the perceptual decision-making tasks comprising a visual classification task, an auditory classification task, and an audio-visual classification task, the visual classification task comprising classifying M first pictures, the auditory classification task comprising classifying N first sounds, and the audio-visual classification task comprising classifying L audio-visual stimulus pairs, each audio-visual stimulus pair comprising a second picture and a second sound corresponding to a target in the second picture, where M ≥ 2, N ≥ 2, and L ≥ 2;
    collecting behavioral response data generated by a user when completing the perceptual decision-making tasks; and
    determining a training result from the behavioral response data, the training result comprising an accuracy with which the user completes the perceptual decision-making tasks.
  2. The training method according to claim 1, wherein the behavioral response data comprise a classification result corresponding to each classification task in the perceptual decision-making tasks and a reaction time for completing each classification task, and the training method further comprises:
    inputting the classification results and the reaction times into a preset drift diffusion model for processing to obtain a drift rate, a decision boundary, and a non-decision time; and
    evaluating the user's perceptual decision-making ability according to the drift rate, the decision boundary, and the non-decision time.
  3. The training method according to claim 2, wherein after the evaluating of the user's perceptual decision-making ability according to the drift rate, the decision boundary, and the non-decision time, the training method further comprises:
    determining the user's health state according to the user's perceptual decision-making ability.
  4. The training method according to claim 1, wherein before the perceptual decision-making tasks are randomly presented, the training method further comprises:
    obtaining M preset pictures;
    adjusting basic attributes of each preset picture to obtain the M first pictures; and
    constructing the visual classification task from the M first pictures.
  5. The training method according to claim 4, wherein before the perceptual decision-making tasks are randomly presented, the training method further comprises:
    obtaining N preset sounds;
    adjusting sound attributes of each preset sound to obtain the N first sounds; and
    constructing the auditory classification task from the N first sounds.
  6. The training method according to claim 5, wherein before the perceptual decision-making tasks are randomly presented, the training method further comprises:
    determining the L second pictures among the M first pictures;
    determining the L second sounds among the N first sounds;
    pairing the L second pictures with the L second sounds to obtain the L audio-visual stimulus pairs; and
    constructing the audio-visual classification task from the L audio-visual stimulus pairs.
  7. The training method according to any one of claims 1 to 6, further comprising:
    determining a stimulus intensity corresponding to each first picture and each first sound, the stimulus intensity reflecting an accuracy with which each first picture and each first sound is classified;
    selecting, among the M first pictures, pictures whose stimulus intensity is a first stimulus intensity and pictures whose stimulus intensity is a second stimulus intensity;
    selecting, among the N first sounds, sounds whose stimulus intensity is the first stimulus intensity and sounds whose stimulus intensity is the second stimulus intensity;
    constructing a perceptual decision-making task of the first stimulus intensity from the pictures of the first stimulus intensity and the sounds of the first stimulus intensity; and
    constructing a perceptual decision-making task of the second stimulus intensity from the pictures of the second stimulus intensity and the sounds of the second stimulus intensity.
  8. A training apparatus, comprising:
    a display unit, configured to randomly present perceptual decision-making tasks, the perceptual decision-making tasks comprising a visual classification task, an auditory classification task, and an audio-visual classification task, the visual classification task comprising classifying M first pictures, the auditory classification task comprising classifying N first sounds, and the audio-visual classification task comprising classifying L audio-visual stimulus pairs, each audio-visual stimulus pair comprising a second picture and a second sound corresponding to a target in the second picture, where M ≥ 2, N ≥ 2, and L ≥ 2;
    a collection unit, configured to collect behavioral response data generated by a user when completing the perceptual decision-making tasks; and
    a determining unit, configured to determine a training result from the behavioral response data, the training result comprising an accuracy with which the user completes the perceptual decision-making tasks.
  9. A training device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the method according to any one of claims 1 to 7 is implemented.
  10. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 7 is implemented.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210661128.7 2022-06-13
CN202210661128.7A CN115171658A (en) 2022-06-13 2022-06-13 Training method, training device, training apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2023240951A1 2023-12-21

Family

ID=83486133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138186 WO2023240951A1 (en) 2022-06-13 2022-12-09 Training method, training apparatus, training device, and storage medium

Country Status (2)

Country Link
CN (1) CN115171658A (en)
WO (1) WO2023240951A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171658A (en) * 2022-06-13 2022-10-11 深圳先进技术研究院 Training method, training device, training apparatus, and storage medium
CN115691545B (en) * 2022-12-30 2023-05-26 杭州南粟科技有限公司 Category perception training method and system based on VR game

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105266805A (en) * 2015-10-23 2016-01-27 华南理工大学 Visuoauditory brain-computer interface-based consciousness state detecting method
US20190159715A1 (en) * 2016-08-05 2019-05-30 The Regents Of The University Of California Methods of cognitive fitness detection and training and systems for practicing the same
CN110022768A (en) * 2016-08-26 2019-07-16 阿克利互动实验室公司 The cognition platform coupled with physiology component
CN110347242A (en) * 2019-05-29 2019-10-18 长春理工大学 Audio visual brain-computer interface spelling system and its method based on space and semantic congruence
CN110786825A (en) * 2019-09-30 2020-02-14 浙江凡聚科技有限公司 Spatial perception detuning training system based on virtual reality visual and auditory pathway
CN114201053A (en) * 2022-02-17 2022-03-18 北京智精灵科技有限公司 Cognition enhancement training method and system based on neural regulation
CN115171658A (en) * 2022-06-13 2022-10-11 深圳先进技术研究院 Training method, training device, training apparatus, and storage medium

Also Published As

Publication number Publication date
CN115171658A (en) 2022-10-11


Legal Events

121 (Ep): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 22946617; Country of ref document: EP; Kind code of ref document: A1.