
WO2023240951A1 - Training method, training apparatus, training device, and storage medium - Google Patents

Training method, training apparatus, training device, and storage medium

Info

Publication number
WO2023240951A1
Authority
WO
WIPO (PCT)
Prior art keywords
decision
visual
training
task
making
Prior art date
Application number
PCT/CN2022/138186
Other languages
English (en)
Chinese (zh)
Inventor
张志林
李胜楠
杨伟平
梁栋
吴景龙
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2023240951A1


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/01 Assessment or evaluation of speech recognition systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 Simulators for teaching or training purposes
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment

Definitions

  • the present application relates to the field of computer technology, and in particular, to a training method, device, equipment and storage medium.
  • Alzheimer's disease (AD) manifests clinically as comprehensive dementia, including memory impairment, aphasia, apraxia, agnosia, impairment of visuospatial skills, executive dysfunction, and changes in personality and behavior.
  • Alzheimer's disease reduces patients' perceptual decision-making ability, causing them to perform poorly on decision-making tasks that demand perceptual ability, attentional resources, memory, and other high-level cognitive abilities.
  • In view of this, the present application provides a training method, training apparatus, training device, and storage medium that can effectively improve an individual's perceptual decision-making ability.
  • this application provides a training method, including: randomly displaying a perceptual decision-making task.
  • the perceptual decision-making task includes a visual classification task, an auditory classification task, and an audio-visual classification task.
  • the visual classification task includes classifying M first pictures respectively.
  • the auditory classification task includes classifying N first sounds respectively.
  • the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each of which includes a second picture and a second sound corresponding to the target in the second picture, where M ≥ 2, N ≥ 2, L ≥ 2. The method further includes: collecting the behavioral response data generated by the user when completing the perceptual decision-making task; and determining the training results based on the behavioral response data, where the training results include the user's accuracy in completing the perceptual decision-making task.
  • the behavioral response data includes classification results corresponding to each classification task in the perceptual decision-making task and reaction times for completing each classification task.
  • the training method also includes: inputting the classification results and reaction times into a preset drift-diffusion model for processing to obtain the drift rate, decision boundary, and non-decision time; and evaluating the user's perceptual decision-making ability based on the drift rate, decision boundary, and non-decision time.
  • the training method also includes: determining the user's health status based on the user's perceptual decision-making ability.
  • the training method also includes: obtaining M preset pictures; adjusting the basic attributes of each preset picture to obtain the M first pictures; and constructing the visual classification task based on the M first pictures.
  • the training method further includes: obtaining N preset sounds; adjusting the sound attributes of each preset sound to obtain N first sounds; and constructing an auditory classification task based on the N first sounds.
  • the training method further includes: determining L second pictures among the M first pictures; determining L second sounds among the N first sounds; pairing the L second pictures with the L second sounds to obtain L audio-visual stimulus pairs; and constructing the audio-visual classification task based on the L audio-visual stimulus pairs.
  • the training method further includes: determining the stimulation intensity corresponding to each first picture and each first sound, where the stimulation intensity reflects the accuracy with which each first picture and each first sound is classified; selecting, from the M first pictures, the pictures whose stimulation intensity is a first stimulation intensity and the pictures whose stimulation intensity is a second stimulation intensity; selecting, from the N first sounds, the sounds whose stimulation intensity is the first stimulation intensity and the sounds whose stimulation intensity is the second stimulation intensity; constructing a perceptual decision-making task of the first stimulation intensity from the pictures and sounds of the first stimulation intensity; and constructing a perceptual decision-making task of the second stimulation intensity from the pictures and sounds of the second stimulation intensity.
  • this application provides a training device, including:
  • the display unit is used to randomly display perceptual decision-making tasks.
  • the perceptual decision-making tasks include visual classification tasks, auditory classification tasks, and audio-visual classification tasks.
  • the visual classification task includes classifying the M first pictures respectively, and the auditory classification task includes classifying the N first sounds respectively.
  • the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each audio-visual stimulus pair including a second picture and a second sound corresponding to the target in the second picture, where M ≥ 2, N ≥ 2, L ≥ 2;
  • the collection unit is used to collect behavioral response data generated by users when completing perceptual decision-making tasks.
  • the determination unit is used to determine the training results based on the behavioral response data, and the training results include the accuracy of the user completing the perceptual decision-making task.
  • the present application provides a training device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, the training method of any one of the above first aspects is implemented.
  • the present application provides a computer-readable storage medium that stores a computer program.
  • When the computer program is executed by a processor, the training method described in any one of the above first aspects is implemented.
  • embodiments of the present application provide a computer program product.
  • When the computer program product is run on a processor, it causes the processor to execute the training method described in any one of the above first aspects.
  • the training method provided by this application randomly displays perceptual decision-making tasks to users, and trains users based on the perceptual decision-making tasks.
  • the behavioral response data generated by the user when completing the perceptual decision-making task is collected.
  • the training results can be determined, such as determining the accuracy of the user in completing the perceptual decision-making task.
  • this perceptual decision-making task includes classification tasks across multiple channels: visual, auditory, and audio-visual.
  • Using this perceptual decision-making task to train users can accelerate the user's information storage and encoding in high-order cognitive processes and can improve the user's reaction speed, which in turn promotes the formation of perceptual decisions, thereby effectively improving the individual's perceptual decision-making ability.
  • Figure 1 is a schematic flow chart of a training method provided by an exemplary embodiment of the present application.
  • Figure 2 is a schematic diagram of a first picture provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a first sound provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of an audio-visual stimulus pair provided by an embodiment of the present application.
  • Figure 5 is a specific flow chart of a training method according to another exemplary embodiment of the present application.
  • Figure 6 is a specific flow chart of a training method according to yet another exemplary embodiment of the present application.
  • Figure 7 is a schematic diagram of a training device provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of a training device provided by another embodiment of the present application.
  • Alzheimer's disease (AD) manifests clinically as comprehensive dementia, including memory impairment, aphasia, apraxia, agnosia, impairment of visuospatial skills, executive dysfunction, and changes in personality and behavior.
  • Perceptual decision-making is a continuous, hierarchical cognitive operation that converts sensory information into goal-oriented behavioral responses: decision information is encoded and accumulated from sensory information (such as the information generated when objective things act directly on the sensory organs), decision rules are applied to reach a decision, and the decision culminates in a behavioral response. For example, a user sees a picture, determines that its content is an animal, and selects the animal option among the preset options; this entire process is called perceptual decision-making.
  • Alzheimer's disease reduces patients' perceptual decision-making ability, causing them to perform poorly on decision-making tasks that demand perceptual ability, attentional resources, memory, and other high-level cognitive abilities.
  • this application provides a training method, training device, training equipment and storage medium.
  • In the method, perceptual decision-making tasks are randomly displayed to users, and the users are trained based on the perceptual decision-making tasks.
  • the behavioral response data generated by the user when completing the perceptual decision-making task is collected.
  • the training results can be determined, such as determining the accuracy of the user in completing the perceptual decision-making task.
  • this perceptual decision-making task includes classification tasks on multiple channels of vision, hearing, and visual and auditory
  • using this perceptual decision-making task to train users can accelerate the user's information storage and encoding in the high-order cognitive process, and can improve The user's reaction speed, in turn, promotes the formation of perceptual decision-making, thereby effectively improving the individual's perceptual decision-making ability.
  • the embodiment of this application provides training software.
  • the training software can be installed in a training device, which can be any device that can display pictures and has an audio playback function, such as a smartphone, tablet, desktop computer, laptop, robot, or smart wearable device.
  • the training software provided by this application can not only train users, but also test the user's perceptual decision-making ability before or after training.
  • Figure 1 is a schematic flow chart of a training method provided by an exemplary embodiment of the present application.
  • the training method as shown in Figure 1 may include S101 to S103, specifically as follows:
  • S101 Randomly display a perceptual decision-making task. Perceptual decision-making tasks include visual classification tasks, auditory classification tasks, and audio-visual classification tasks.
  • the visual classification task includes classifying M first pictures respectively, M ≥ 2.
  • M represents the number of the first picture.
  • M can be a positive integer greater than or equal to 2.
  • the first picture may be a picture containing any object.
  • For example, the first picture may be a picture containing a face, a picture containing a car, a picture containing an animal, a picture containing a plant, a picture containing a building, a picture containing food, a picture containing daily necessities, a picture containing an electronic device, a picture containing a musical instrument, and so on. Different types of first pictures can be added according to actual training needs. This is only an illustrative description, not a limitation.
  • Figure 2 is a schematic diagram of a first picture provided by an embodiment of the present application. As shown in Figure 2, it depicts a first picture in the visual classification task, which is a picture containing a face.
  • the first picture may be obtained by taking a photo, may be collected from the Internet, may be obtained by painting, etc.
  • the auditory classification task involves classifying N first sounds respectively, N ≥ 2.
  • N represents the number of first sounds, for example, N can be a positive integer greater than or equal to 2.
  • the first sound may be audio containing any sound.
  • the first sound may be audio containing the sound of a person, audio containing the sound of a car, audio containing the sound of an animal, audio containing the sound of an electronic device, audio containing the sound of an instrument, etc.
  • Different types of first sounds can be added according to actual training needs. This is only an exemplary description and is not limited.
  • Figure 3 is a schematic diagram of a first sound provided by an embodiment of the present application. As shown in Figure 3, it depicts a first sound in the auditory classification task.
  • the first sound is an audio containing a character's voice, specifically an audio containing a little girl's voice.
  • the first sound may be obtained by recording, or may be collected from the Internet, etc.
  • the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each audio-visual stimulus pair including a second picture and a second sound corresponding to the target in the second picture, L ≥ 2.
  • L represents the number of audio-visual stimulus pairs.
  • L can be a positive integer greater than or equal to 2.
  • the second picture may be a picture containing any object.
  • the second picture may be a picture containing faces, a picture containing cars, a picture containing animals, a picture containing musical instruments, etc.
  • the second sound may be audio containing human voices, audio containing car sounds, audio containing animal sounds, audio containing musical instrument sounds, etc.
  • For example, a picture containing a face and the audio of the voice corresponding to that face form an audio-visual stimulus pair; a picture containing a car and the audio of the sound corresponding to that car form an audio-visual stimulus pair; a picture containing an animal and the audio of the sound corresponding to that animal form an audio-visual stimulus pair; and a picture containing a musical instrument and the audio of the sound corresponding to that instrument form an audio-visual stimulus pair.
  • Figure 4 is a schematic diagram of an audio-visual stimulus pair provided by an embodiment of the present application.
  • Figure 4 shows an audio-visual stimulus pair in the audio-visual classification task.
  • the audio-visual stimulus pair includes a second picture and a second sound corresponding to the target in the second picture.
  • the second picture is a picture containing a car
  • the second sound is an audio containing a car sound, specifically an audio containing a car horn.
  • the second picture in the audio-visual stimulus pair can be selected from the first picture, or it can be re-photographed, or it can be collected from the Internet, or it can be obtained through painting.
  • the second sound in the audio-visual stimulus pair can be selected from the first sound, can be recorded, or can be collected from the Internet.
  • After training begins, the perceptual decision-making task starts to be displayed randomly in the display interface of the training device.
  • To start the training, users can click manually, operate remotely, or use voice control.
  • a gaze point is presented in the center of the display interface of the training device.
  • The presentation duration of the gaze point can be set as desired, for example to 2000 ms, after which the visual classification task, the auditory classification task, and the audio-visual classification task are displayed randomly.
  • One display method is to show one task completely and then another, until all tasks have been shown.
  • For example, the visual classification task is shown first; after all M first pictures in the visual classification task have been shown, the auditory classification task is shown; after the N first sounds in the auditory classification task have been shown, the audio-visual classification task is shown, until all L audio-visual stimulus pairs in the audio-visual classification task have been displayed.
  • The display order can be visual classification task, auditory classification task, audio-visual classification task; or visual classification task, audio-visual classification task, auditory classification task; or audio-visual classification task, visual classification task, auditory classification task; and so on. There is no limit on the order.
  • Another display method is to intersperse the visual, auditory, and audio-visual classification tasks, that is, the M first pictures, the N first sounds, and the L audio-visual stimulus pairs are interleaved until all tasks have been displayed. For example, several first pictures are displayed, then several audio-visual stimulus pairs, then several first sounds, then several more first pictures, and so on; or a first picture is displayed, then a first sound, then an audio-visual stimulus pair, then another first sound, then another first picture, and so on, until all tasks have been displayed.
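  • As a concrete illustration of the interleaved display, the following Python sketch builds one possible randomized trial sequence; the function name and stimulus placeholders are illustrative assumptions, not taken from the application.

```python
import random

def build_trial_sequence(pictures, sounds, av_pairs, seed=None):
    """Interleave visual, auditory, and audio-visual trials in a random order.

    pictures: the M first pictures (visual classification trials)
    sounds:   the N first sounds (auditory classification trials)
    av_pairs: the L (second picture, second sound) audio-visual stimulus pairs
    """
    rng = random.Random(seed)
    trials = ([("visual", p) for p in pictures]
              + [("auditory", s) for s in sounds]
              + [("audio-visual", pair) for pair in av_pairs])
    rng.shuffle(trials)  # interspersed display: all three task types mixed randomly
    return trials

# Example with 4 pictures, 4 sounds, and 2 audio-visual pairs
sequence = build_trial_sequence(
    ["face_1", "car_1", "face_2", "car_2"],
    ["voice_1", "horn_1", "voice_2", "horn_2"],
    [("face_1", "voice_1"), ("car_1", "horn_1")],
    seed=0)
for task_type, stimulus in sequence:
    print(task_type, stimulus)
```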
  • S102 Collect the user's behavioral response data when completing the perceptual decision-making task.
  • In addition to displaying each perceptual decision-making task, the display interface of the training device also displays the options corresponding to each task.
  • the user makes a choice for each classification task.
  • the data generated by these selection operations are behavioral response data, and the training device collects these behavioral response data.
  • For example, the first picture in the visual classification task is displayed in the current display interface, and two options are displayed side by side below, above, to the right of, or to the left of the first picture. If the first picture is a picture containing a face, the correct choice is to click the left of the two options displayed side by side; if the first picture is a picture containing a car, the correct choice is to click the right of the two options.
  • Similarly, when the training device plays the first sound in the auditory classification task, the display interface shows two options side by side. If the first sound is a human voice, the correct choice is to click the left of the two options; if the first sound is the sound of a car, the correct choice is to click the right option.
  • The distance between the user and the training device can be set and adjusted as needed; for example, the user sits 60 cm from the display interface and speakers.
  • the training device records the choices made by the user for each classification task.
  • Alternatively, the user's selection operation for each classification task can be made with a mouse: for some classification tasks the correct choice is to click the left mouse button, and for others it is to click the right mouse button.
  • In the audio-visual classification task, the training device displays the second picture and plays the corresponding second sound. For example, if the second picture contains a face and the second sound contains a character's voice, the correct selection is to click the left mouse button. The training device records the choice made by the user for each classification task.
  • In a specific implementation, two adjacent classification tasks are displayed with a preset time interval between them, and each classification task is displayed for a preset duration.
  • For example, the preset time interval between two adjacent classification tasks can be 1200 to 1500 ms, and the display duration of each classification task can be 300 ms.
  • The preset time interval between two adjacent audio-visual stimulus pairs can be 1200 to 1500 ms, and the display duration of each audio-visual stimulus pair can be 300 to 500 ms. This is only an illustrative description, not a limitation.
  • S103 Determine the training results based on the behavioral response data. The training results include the accuracy of users completing perceptual decision-making tasks.
  • the behavioral response data is data generated by the user making a selection operation for each classification task.
  • the behavioral response data corresponding to each classification task is compared with the correct choice corresponding to the task, and the training result is determined based on the comparison result.
  • One point is scored for each correct choice; no points are scored for a missed or wrong choice.
  • A score can thus be obtained from the user's behavioral response data, and the proportion of this score to the total score (the score when all tasks are answered correctly) gives the user's accuracy in completing the perceptual decision-making task. This is only an illustrative description, not a limitation.
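  • A minimal sketch of this scoring rule; the data layout (dicts keyed by trial id) is an illustrative assumption.

```python
def accuracy(responses, correct_choices):
    """Score one point per correct selection; missed or wrong selections score zero."""
    score = sum(1 for trial, answer in correct_choices.items()
                if responses.get(trial) == answer)
    return score / len(correct_choices)  # proportion of the total possible score

# Example: 3 of 4 classification tasks answered correctly -> accuracy 0.75
print(accuracy({1: "left", 2: "right", 3: "left", 4: None},
               {1: "left", 2: "right", 3: "left", 4: "right"}))
```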
  • the perceptual decision-making task is randomly displayed to the user, and the user is trained based on the perceptual decision-making task.
  • the behavioral response data generated by the user when completing the perceptual decision-making task is collected. Based on the behavioral response data, the training results can be determined, such as determining the accuracy of the user in completing the perceptual decision-making task.
  • This perceptual decision-making task includes classification tasks across multiple channels: visual, auditory, and audio-visual.
  • Using this perceptual decision-making task to train users can accelerate the user's information storage and encoding in high-order cognitive processes and can improve the user's reaction speed, which in turn promotes the formation of perceptual decisions, thereby effectively improving the individual's perceptual decision-making ability.
  • Figure 5 is a specific flow chart of a training method according to another exemplary embodiment of the present application.
  • the training method shown in Figure 5 may include S201 to S205, specifically as follows:
  • S201 Randomly display a perceptual decision-making task.
  • S202 Collect the user's behavioral response data when completing the perceptual decision-making task.
  • S203 Determine training results based on behavioral response data.
  • S201 to S203 are exactly the same as S101 to S103 in the embodiment corresponding to Figure 1; for details, refer to the description of S101 to S103 in that embodiment, which will not be repeated here.
  • Behavioral response data include the classification results corresponding to each classification task in the perceptual decision-making task and the reaction time to complete each classification task.
  • the classification results corresponding to each classification task are the selection operations made by the user. For example, the user clicks on the left option of two options displayed side by side, the user clicks on the right option of two options displayed side by side, the user clicks the left button of the mouse, and the user clicks the right button of the mouse.
  • The reaction time for completing each classification task is determined from the time at which the task is first presented and the time at which the user makes a selection. For a given classification task, timing starts when the task is displayed and stops as soon as the user makes a selection; the recorded duration is the reaction time for that task.
  • S204 Input the classification results and reaction time into the preset drift diffusion model for processing, and obtain the drift rate, decision boundary and non-decision time.
  • the preset drift-diffusion model simulates the decision-making process in the classification task.
  • The two response choices are represented as an upper boundary and a lower boundary.
  • The perceptual decision-making process continuously accumulates evidence over time until it reaches one of the two boundaries, which then triggers the corresponding behavioral response.
  • The drift rate, decision boundary, and non-decision time are different parameters obtained when the drift-diffusion model processes the classification results and reaction times; each parameter maps a different aspect of the cognitive processing behind the behavior in the perceptual decision-making process. Specifically, the drift rate describes the speed at which information is accumulated, the decision boundary describes the amount of evidence that must be reached before a response is made, and the non-decision time describes the time taken by sensory encoding and the motor response.
  • In this way, the specific parameters of the drift-diffusion model can be calculated under different conditions to reflect the user's underlying cognitive processes during cross-channel perceptual decision-making, and thereby determine the user's training effect.
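  • To make the roles of the three parameters concrete, here is a minimal forward simulation of a single drift-diffusion trial. Symmetric boundaries are used as a common simplification, and all values are illustrative; this is not the application's fitting procedure.

```python
import random

def simulate_ddm_trial(drift_rate=1.0, boundary=1.0, non_decision=0.3,
                       dt=0.001, noise_sd=1.0, seed=0):
    """Accumulate noisy evidence from 0 until it hits +boundary or -boundary.

    Returns (choice, reaction_time). The reaction time is the evidence
    accumulation time plus the non-decision time, which stands for sensory
    encoding and the motor response.
    """
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift_rate * dt + noise_sd * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    choice = "upper" if evidence >= boundary else "lower"
    return choice, t + non_decision

print(simulate_ddm_trial())  # e.g. ('upper', 0.8...)
```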
  • Before fitting, the reaction-time data can be cleaned: the standard deviation is calculated from all remaining reaction times, and data whose reaction times fall outside a preset standard-deviation range are eliminated, for example reaction times beyond plus or minus 2.5 standard deviations. This is only an illustrative description, not a limitation.
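  • A sketch of this cleaning step, assuming the plus or minus 2.5 standard-deviation criterion from the example:

```python
import numpy as np

def remove_rt_outliers(reaction_times, n_sd=2.5):
    """Drop reaction times farther than n_sd standard deviations from the mean."""
    rts = np.asarray(reaction_times, dtype=float)
    mask = np.abs(rts - rts.mean()) <= n_sd * rts.std()
    return rts[mask]

print(remove_rt_outliers([0.45, 0.52, 0.48, 2.90, 0.50]))  # the 2.90 s trial is dropped
```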
  • In the model, f(t) is the conditional probability distribution of the reaction time t.
  • Under a Bayesian treatment, the function can be split into two parts, a prior and a likelihood (posterior ∝ likelihood × prior).
  • The prior is the probability distribution assumed for the drift-diffusion model parameters before the data are known, while the likelihood is the probability of the observed behavioral response data given the model parameters.
  • The goal of fitting the drift-diffusion model is to find the parameter values under the likelihood; because of the complexity of the formula, the parameter values cannot be obtained directly, so a Markov chain Monte Carlo (MCMC) algorithm is needed.
  • The MCMC algorithm characterizes the target function through repeated sampling, inferring population parameters from samples; the likelihood part of the Bayesian formulation is therefore computed with MCMC to estimate the parameter distributions.
  • Specifically, the HDDM toolbox for the Python programming language can be used; it provides hierarchical Bayesian parameter estimation for the drift-diffusion model and allows the drift-diffusion model parameters of every subject to be estimated simultaneously, yielding the drift rate, decision boundary, and non-decision time.
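  • With the HDDM toolbox, such a hierarchical Bayesian fit looks roughly like this; the CSV layout and sampler settings are illustrative assumptions rather than values from the application.

```python
import hddm

# One row per classification trial, with columns 'subj_idx' (participant),
# 'rt' (reaction time in seconds), and 'response' (1 = correct, 0 = wrong)
data = hddm.load_csv("behavioral_responses.csv")

model = hddm.HDDM(data)        # hierarchical drift-diffusion model
model.find_starting_values()   # optimize a starting point for the sampler
model.sample(2000, burn=200)   # MCMC sampling of the posterior

stats = model.gen_stats()
print(stats.loc[["v", "a", "t"]])  # drift rate, decision boundary, non-decision time
```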
  • In addition, by inputting the classification results and reaction times into the preset drift-diffusion model for processing, parameters such as the relative starting point, the inter-training variation of the relative starting point, the inter-training variation of the drift rate, and the inter-training variation of the non-decision time can also be obtained.
  • the relative starting point is used to describe the starting preference for response selection.
  • The inter-training variation of the relative starting point is expressed as a uniformly distributed range around the mean relative starting point and describes the distribution of actual starting points for a particular training session.
  • The inter-training variation of the drift rate is expressed as the standard deviation of a normal distribution whose mean is the drift rate, and describes the actual drift-rate distribution for a specific training session.
  • The inter-training variation of the non-decision time is represented by a uniformly distributed range around the mean non-decision time and describes the distribution of actual non-decision times in training.
  • S205 Evaluate the user's perceptual decision-making ability based on the drift rate, decision boundary and non-decision time.
  • parameters such as drift rate, decision boundary, and non-decision time each correspond to different indicator ranges.
  • the indicator range corresponding to the drift rate can be greater than -5 and less than 5
  • the indicator range corresponding to the decision boundary can be greater than 0.5 and less than 2
  • the indicator range corresponding to the non-decision time can be greater than 0.1 and less than 0.5.
  • If the user's drift rate, decision boundary, and non-decision time are all within their corresponding indicator ranges, the user's perceptual decision-making ability is evaluated as strong. If two of the three are within their corresponding indicator ranges, the ability is evaluated as moderate. If only one of them is within its corresponding indicator range, or none of them are, the ability is evaluated as poor. This is only an illustrative description, not a limitation.
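  • A minimal sketch of this three-tier evaluation, assuming the example indicator ranges above:

```python
def evaluate_ability(drift_rate, boundary, non_decision):
    """Count how many fitted parameters fall inside the example indicator ranges."""
    in_range = [
        -5 < drift_rate < 5,       # drift rate range
        0.5 < boundary < 2,        # decision boundary range
        0.1 < non_decision < 0.5,  # non-decision time range (s)
    ]
    return {3: "strong", 2: "moderate"}.get(sum(in_range), "poor")

print(evaluate_ability(drift_rate=2.1, boundary=1.2, non_decision=0.28))  # strong
```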
  • In this embodiment, the user's classification results and reaction times are processed through a preset drift-diffusion model to obtain the drift rate, decision boundary, and non-decision time.
  • Parameters such as the drift rate, decision boundary, and non-decision time accurately reflect the user's underlying cognitive processes during cross-channel perceptual decision-making, so analyzing them allows the user's perceptual decision-making ability to be evaluated accurately.
  • Figure 6 is a specific flow chart of a training method according to yet another exemplary embodiment of the present application.
  • the training method shown in Figure 6 may include S301 to S306. It is worth noting that S301 to S305 in this embodiment are exactly the same as S201 to S205 in the embodiment corresponding to Figure 5; for details, refer to the description of S201 to S205 in that embodiment, which will not be repeated here. S306 is detailed as follows:
  • S306 Determine the user's health status based on the user's perceptual decision-making ability.
  • Some conditions can reduce a user's perceptual decision-making ability; for example, Alzheimer's disease reduces patients' perceptual decision-making abilities.
  • Therefore, if the user's perceptual decision-making ability is evaluated as poor, the health status of the user in this training can be determined to be an unhealthy state; for example, the user may be a patient with Alzheimer's disease. This is only an illustrative description, not a limitation.
  • In this way, the user's health status can be determined accurately, which helps detect Alzheimer's disease patients accurately and in a timely manner so that they can be treated as early as possible.
  • In a possible implementation, the training method provided by this application may also include: obtaining M preset pictures; adjusting the basic attributes of each preset picture to obtain M first pictures; and constructing a visual classification task based on the M first pictures.
  • Basic attributes can include the image's spatial frequency, contrast, brightness, resolution, size, clarity, format, and so on. For example, several preset pictures are obtained, half containing faces and half containing cars; their spatial frequency, contrast, brightness, and resolution are adjusted to be consistent, for example to 670 × 670 pixels.
  • the clarity of each picture is adjusted to 8 different levels of 30%, 32.5%, 35%, 37.5%, 40%, 42.5%, 45%, and 50% through the signal-to-noise ratio.
  • In this way, M first pictures are obtained; for example, 240 first pictures.
  • A correct option is set for each first picture: the correct choice may be to click the left of two options displayed side by side, to click the right of the two options, to click the left mouse button, or to click the right mouse button, etc.
  • In this way, a visual classification task is constructed.
  • Because the basic attributes of every first picture are adjusted, training bias caused by differences in the pictures' basic attributes is effectively avoided, ensuring that basic attributes do not influence the user's choices and thus improving the accuracy of the training results.
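  • The following sketch shows one plausible reading of this preprocessing: resize a preset picture to 670 × 670, normalize its brightness and contrast, and generate eight clarity levels by mixing it with noise. The exact adjustment pipeline is an assumption, since the application does not specify it.

```python
import numpy as np
from PIL import Image

CLARITY_LEVELS = [0.30, 0.325, 0.35, 0.375, 0.40, 0.425, 0.45, 0.50]

def make_first_pictures(path, seed=0):
    """Produce one degraded version of a preset picture per clarity level."""
    img = np.asarray(Image.open(path).convert("L").resize((670, 670)), dtype=float)
    img = (img - img.mean()) / (img.std() + 1e-9)   # match brightness and contrast
    noise = np.random.default_rng(seed).standard_normal(img.shape)
    # Clarity as a signal proportion: `snr` of the pixel values come from the image
    return [snr * img + (1 - snr) * noise for snr in CLARITY_LEVELS]
```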
  • In a possible implementation, the training method provided by this application may also include: obtaining N preset sounds; adjusting the sound attributes of each preset sound to obtain N first sounds; and constructing an auditory classification task based on the N first sounds.
  • The preset sound refers to the original, unprocessed version of a first sound.
  • Sound attributes can include the sound's frequency, pitch, loudness, timbre, and so on. For example, several preset sounds are obtained, half of which are human voices and half of which are car sounds, and their loudness and frequency are adjusted to be consistent.
  • For example, preset software such as Matlab can be used to process the sounds so that the loudness and frequency of the processed sounds are consistent, and speech synthesis software can then be used to embed the processed sounds into white noise of different loudnesses, yielding first sounds with different signal-to-noise ratios.
  • For example, the loudness of the processed sounds can be reduced to 50%, and speech synthesis software can be used to embed the loudness-adjusted sounds into eight white noises of different loudnesses, resulting in first sounds with signal-to-noise ratios of 12.5%, 25%, 37.5%, 50%, 62.5%, 75%, 87.5%, and 100%, respectively.
  • The loudness of these first sounds is kept consistent, for example at 60 dB.
  • In this way, N first sounds are obtained; for example, 240 first sounds.
  • A correct option is set for each first sound: the correct choice may be to click the left of two options displayed side by side, to click the right of the two options, to click the left mouse button, or to click the right mouse button, etc.
  • In this way, an auditory classification task is constructed.
  • Because the sound attributes of every first sound are adjusted, training bias caused by differences in the sounds' attributes is effectively avoided, ensuring that sound attributes do not influence the user's choices and thus improving the accuracy of the training results.
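  • A sketch of embedding a loudness-normalized sound into white noise at the listed signal proportions; the exact mixing rule is an assumption, since the application only names the tools used.

```python
import numpy as np

SNR_LEVELS = [0.125, 0.25, 0.375, 0.50, 0.625, 0.75, 0.875, 1.0]

def embed_in_white_noise(sound, snr, rng):
    """Mix a sound with white noise so the signal makes up `snr` of the amplitude."""
    sound = sound / (np.max(np.abs(sound)) + 1e-9)   # normalize loudness
    noise = rng.standard_normal(sound.shape)
    noise /= np.max(np.abs(noise)) + 1e-9
    return snr * sound + (1.0 - snr) * noise         # snr = 1.0 means no noise

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100))  # stand-in preset sound
first_sounds = [embed_in_white_noise(tone, snr, rng) for snr in SNR_LEVELS]
```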
  • In a possible implementation, the training method provided by this application may also include: determining L second pictures among the M first pictures; determining L second sounds among the N first sounds; pairing the L second pictures with the L second sounds to obtain L audio-visual stimulus pairs; and constructing an audio-visual classification task based on the L audio-visual stimulus pairs.
  • the second picture in the audio-visual stimulus pair may be selected from the first picture, and the second sound in the audio-visual stimulus pair may be selected from the first sound.
  • For example, L first pictures are selected from the M first pictures and determined as the L second pictures, and L first sounds are selected from the N first sounds and determined as the L second sounds.
  • Since the second sound must be the sound corresponding to the target in the second picture, the pictures and sounds are selected so that they match. For example, if a first picture containing a car is selected from the M first pictures, and the sound corresponding to the car happens to be among the N first sounds, then the selected first picture is determined as a second picture and the corresponding car sound is determined as the matching second sound.
  • A correct option is set for each audio-visual stimulus pair: the correct choice may be to click the left of two options displayed side by side, to click the right of the two options, to click the left mouse button, or to click the right mouse button, etc.
  • In this way, an audio-visual classification task is constructed.
  • The second picture and second sound in each audio-visual stimulus pair are selected from the first pictures and first sounds. Since the basic attributes of the first pictures and the sound attributes of the first sounds have already been adjusted, the attributes of the second pictures and second sounds in each audio-visual stimulus pair are adjusted as well. This effectively avoids training bias caused by attribute differences, ensures that these attributes do not influence the user's choices, and thus improves the accuracy of the training results.
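  • A minimal sketch of the pairing step, assuming each stimulus carries a category label so that a second picture can be matched with the sound of its target:

```python
def build_av_pairs(first_pictures, first_sounds):
    """Pair pictures with sounds of the same category.

    first_pictures: list of (category, picture) tuples, e.g. ("car", car_image)
    first_sounds:   list of (category, sound) tuples, e.g. ("car", horn_audio)
    Returns the L audio-visual stimulus pairs (second picture, second sound).
    """
    sounds_by_category = {}
    for category, sound in first_sounds:
        sounds_by_category.setdefault(category, []).append(sound)
    pairs = []
    for category, picture in first_pictures:
        if sounds_by_category.get(category):  # a matching sound exists
            pairs.append((picture, sounds_by_category[category].pop(0)))
    return pairs
```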
  • pre-training may also be included before formal training.
  • In this case, the training method provided by this application may also include S401 to S405.
  • S401 Determine the stimulation intensity corresponding to each first picture and each first sound.
  • S402 Select the picture whose stimulation intensity is the first stimulation intensity and the picture whose stimulation intensity is the second stimulation intensity among the M first pictures.
  • S403 Select the sound whose stimulation intensity is the first stimulation intensity and the sound whose stimulation intensity is the second stimulation intensity from the N first sounds.
  • the stimulus intensity is used to reflect the corresponding accuracy of each first picture and each first sound when they are classified.
  • In the pre-training, the M first pictures are displayed in the display interface of the training device, the user makes a selection for each first picture, and each selection is compared with the correct choice for that picture. One point is scored for each correct choice; no points are scored for a missed or wrong choice. A score is obtained from all of the user's selections, and the proportion of this score to the total score (the score when all first pictures are answered correctly) gives the user's pre-training accuracy.
  • Based on this accuracy, the stimulation intensity at which the user correctly classifies the first pictures is determined.
  • If the accuracy rate reaches a first threshold, the stimulation intensity at which the user correctly classifies the first pictures is determined to be the first stimulation intensity; if the accuracy rate reaches a second threshold, it is determined to be the second stimulation intensity. The first threshold is greater than the second threshold, and the first stimulation intensity is higher than the second stimulation intensity.
  • For example, the first threshold is 90%, the second threshold is 70%, the first stimulation intensity is high intensity, and the second stimulation intensity is low intensity. If the accuracy rate of the pre-training is 90%, the stimulation intensity at which the user correctly classifies the first pictures is the first stimulation intensity, that is, high intensity; if the accuracy rate is 70%, it is the second stimulation intensity, that is, low intensity.
  • Similarly, the N first sounds are presented by the training device, the user makes a selection for each first sound, and each selection is compared with the correct choice for that sound. One point is scored for each correct choice; no points are scored for a missed or wrong choice. A score is obtained from all of the user's selections, and the proportion of this score to the total score (the score when all first sounds are answered correctly) gives the user's pre-training accuracy.
  • Based on this accuracy, the stimulation intensity at which the user correctly classifies the first sounds is determined.
  • If the accuracy rate reaches the first threshold, the stimulation intensity at which the user correctly classifies the first sounds is determined to be the first stimulation intensity; if the accuracy rate reaches the second threshold, it is determined to be the second stimulation intensity. The first threshold is greater than the second threshold, and the first stimulation intensity is higher than the second stimulation intensity.
  • For example, the first threshold is 90%, the second threshold is 70%, the first stimulation intensity is high intensity, and the second stimulation intensity is low intensity. If the accuracy rate of the pre-training is 90%, the stimulation intensity at which the user correctly classifies the first sounds is the first stimulation intensity, that is, high intensity; if the accuracy rate is 70%, it is the second stimulation intensity, that is, low intensity.
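  • A minimal sketch of this mapping, using the 90% and 70% thresholds from the example; how accuracies below both thresholds are handled is not specified in the text.

```python
def intensity_from_pretraining(accuracy, first_threshold=0.90, second_threshold=0.70):
    """Map a pre-training accuracy rate to a stimulation-intensity label."""
    if accuracy >= first_threshold:
        return "first stimulation intensity (high)"
    if accuracy >= second_threshold:
        return "second stimulation intensity (low)"
    return "below both thresholds"  # behavior here is not specified in the text

print(intensity_from_pretraining(0.90))  # first stimulation intensity (high)
print(intensity_from_pretraining(0.70))  # second stimulation intensity (low)
```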
  • S404 Construct a perceptual decision-making task of the first stimulus intensity based on the picture of the first stimulus intensity and the sound of the first stimulus intensity.
  • The perceptual decision-making task of the first stimulus intensity includes the visual classification task of the first stimulus intensity, the auditory classification task of the first stimulus intensity, and the audio-visual classification task of the first stimulus intensity.
  • The process of constructing the visual, auditory, and audio-visual classification tasks of the first stimulus intensity is similar to the construction of the visual, auditory, and audio-visual classification tasks described above. The difference is that the tasks above are constructed from the first pictures, first sounds, second pictures, and second sounds, whereas in this embodiment they are constructed from the pictures and sounds of the first stimulation intensity. For the specific process, refer to the construction of the visual, auditory, and audio-visual classification tasks described above, which will not be repeated here.
  • For example, the constructed visual classification task of the first stimulus intensity contains 50 pictures of the first stimulus intensity, the auditory classification task of the first stimulus intensity contains 50 sounds of the first stimulus intensity, and the audio-visual classification task of the first stimulus intensity contains 50 audio-visual stimulus pairs.
  • S405 Construct a perceptual decision-making task of the second stimulus intensity based on the picture of the second stimulus intensity and the sound of the second stimulus intensity.
  • The perceptual decision-making task of the second stimulus intensity includes the visual classification task of the second stimulus intensity, the auditory classification task of the second stimulus intensity, and the audio-visual classification task of the second stimulus intensity.
  • The process of constructing the visual, auditory, and audio-visual classification tasks of the second stimulus intensity is likewise similar to the construction of the visual, auditory, and audio-visual classification tasks described above, except that the tasks are constructed from the pictures and sounds of the second stimulation intensity. For the specific process, refer to the construction of the visual, auditory, and audio-visual classification tasks described above, which will not be repeated here.
  • For example, the constructed visual classification task of the second stimulus intensity contains 50 pictures of the second stimulus intensity, the auditory classification task of the second stimulus intensity contains 50 sounds of the second stimulus intensity, and the audio-visual classification task of the second stimulus intensity contains 50 audio-visual stimulus pairs.
  • different users are trained using the constructed perceptual decision-making task of the first stimulation intensity and the perceptual decision-making task of the second stimulation intensity.
  • In this way, perceptual decision-making tasks with different stimulation intensities are constructed. They can be used to train different users, improving the perceptual decision-making abilities of different users in a targeted manner.
  • the training method provided by this application may also include: adjusting the difficulty of the perceptual decision-making task according to the training results, thereby more effectively improving the user's perceptual decision-making ability.
  • If the user's accuracy in completing the perceptual decision-making task is greater than a preset accuracy rate, the user's current training effect is good and the difficulty of the perceptual decision-making task can be increased. For example, the types of pictures and sounds in the task can be gradually increased, the preset time interval between two adjacent classification tasks can be shortened, and the number of options per classification task can be increased.
  • If the user's accuracy in completing the perceptual decision-making task is less than or equal to the preset accuracy rate, the user's current training effect is poor and the difficulty of the perceptual decision-making task can be reduced.
  • For example, the types of pictures and sounds in the task can be reduced, and the preset time interval between two adjacent classification tasks can be lengthened.
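  • A sketch of such an adaptive adjustment; the task fields and the preset accuracy rate of 0.8 are illustrative assumptions.

```python
def adjust_difficulty(task, accuracy, preset_accuracy=0.8):
    """Raise or lower the perceptual decision-making task difficulty by accuracy."""
    if accuracy > preset_accuracy:  # training effect is good: make the task harder
        task["num_categories"] += 1                  # more picture and sound types
        task["interval_ms"] = max(800, task["interval_ms"] - 100)  # shorter interval
        task["num_options"] += 1                     # more options per trial
    else:                           # training effect is poor: make the task easier
        task["num_categories"] = max(2, task["num_categories"] - 1)
        task["interval_ms"] += 100                   # longer interval between trials
    return task

task = {"num_categories": 2, "interval_ms": 1200, "num_options": 2}
print(adjust_difficulty(task, accuracy=0.9))
```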
  • The training method provided in this application can also use a race (competition) model to study the impact of cross-channel stimulation on patients with Alzheimer's disease. Individuals respond faster when visual and auditory information are presented simultaneously than when single-channel information (visual information alone or auditory information alone) is presented; this phenomenon is called the redundant signal effect (RSE).
  • RSE can be explained by statistical facilitation: an individual responds to whichever single-channel stimulus (the visual or the auditory stimulus) within a multisensory stimulus reaches the sensory threshold first, so responses to dual-channel stimuli can be accelerated even though no integration occurs.
  • Through training, the individual can reach the sensory threshold sooner under multisensory (visual and auditory) stimulation, thereby improving the individual's perceptual decision-making ability.
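  • A common way to test whether dual-channel speed-ups exceed statistical facilitation is the race-model inequality, which bounds the audio-visual reaction-time distribution by the sum of the single-channel distributions. The application names only a competition model, so this specific test is an illustrative assumption.

```python
import numpy as np

def ecdf(reaction_times, grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(reaction_times))
    return np.searchsorted(rts, grid, side="right") / len(rts)

def race_model_violations(rt_visual, rt_auditory, rt_av, grid=None):
    """Return the times at which F_AV(t) exceeds F_V(t) + F_A(t).

    Violations indicate facilitation beyond what a race between the two
    single-channel processes (statistical facilitation) can produce.
    """
    if grid is None:
        grid = np.linspace(0.1, 1.5, 141)  # seconds; illustrative range
    bound = np.minimum(ecdf(rt_visual, grid) + ecdf(rt_auditory, grid), 1.0)
    return grid[ecdf(rt_av, grid) > bound]
```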
  • FIG. 7 is a schematic diagram of a training device provided by an embodiment of the present application. As shown in Figure 7, the training device provided by this embodiment includes:
  • the display unit 510 is used to randomly display perceptual decision-making tasks.
  • the perceptual decision-making tasks include visual classification tasks, auditory classification tasks, and audio-visual classification tasks; the visual classification task includes classifying M first pictures respectively, the auditory classification task includes classifying N first sounds respectively, and the audio-visual classification task includes classifying L audio-visual stimulus pairs respectively, each audio-visual stimulus pair including a second picture and a second sound corresponding to the target in the second picture, where M ≥ 2, N ≥ 2, L ≥ 2;
  • the collection unit 520 is used to collect behavioral response data generated by the user when completing the perceptual decision-making task.
  • the determining unit 530 is configured to determine training results according to the behavioral response data, where the training results include the accuracy of the user completing the perceptual decision-making task.
  • the behavioral response data includes classification results corresponding to each classification task in the perceptual decision-making task and reaction times for completing each classification task.
  • the training device also includes:
  • an evaluation unit, used to input the classification results and the reaction times into a preset drift-diffusion model for processing to obtain the drift rate, decision boundary, and non-decision time, and to evaluate the user's perceptual decision-making ability according to the drift rate, the decision boundary, and the non-decision time.
  • the training device also includes:
  • a state determination unit is configured to determine the health state of the user based on the user's perceptual decision-making ability.
  • the training device also includes:
  • the first construction unit is used to obtain M preset pictures; adjust the basic attributes of each preset picture to obtain M first pictures; and construct the visual classification task based on the M first pictures.
  • the training device also includes:
  • the second construction unit is used to obtain N preset sounds; adjust the sound attributes of each preset sound to obtain N first sounds; and construct the auditory classification task based on the N first sounds.
  • the training device also includes:
  • a third construction unit configured to determine L second pictures among the M first pictures; determine L second sounds among the N first sounds; and combine the L second pictures and the L second sounds are paired to obtain the L audio-visual stimulus pairs; the audio-visual classification task is constructed based on the L audio-visual stimulus pairs.
  • the training device also includes:
  • a fourth construction unit, used to determine the stimulation intensity corresponding to each of the first pictures and each of the first sounds, where the stimulation intensity reflects the accuracy with which each first picture and each first sound is classified; to select, from the M first pictures, the pictures whose stimulation intensity is the first stimulation intensity and the pictures whose stimulation intensity is the second stimulation intensity; to select, from the N first sounds, the sounds whose stimulation intensity is the first stimulation intensity and the sounds whose stimulation intensity is the second stimulation intensity; to construct a perceptual decision-making task of the first stimulation intensity from the pictures and sounds of the first stimulation intensity; and to construct a perceptual decision-making task of the second stimulation intensity from the pictures and sounds of the second stimulation intensity.
  • FIG. 8 is a schematic diagram of a training device provided by another embodiment of the present application.
  • The training device 6 of this embodiment includes a processor 60, a memory 61 and a computer program 62 stored in the memory 61 and executable on the processor 60.
  • When the processor 60 executes the computer program 62, the steps in each of the above training method embodiments are implemented, such as S101 to S103 shown in FIG. 1.
  • When the processor 60 executes the computer program 62, the functions of each unit in the above embodiments are implemented, such as the functions of units 510 to 530 shown in FIG. 7.
  • The computer program 62 may be divided into one or more units, which are stored in the memory 61 and executed by the processor 60 to complete the present application.
  • The one or more units may be a series of computer instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 62 in the training device 6.
  • For example, the computer program 62 can be divided into a display unit, a collection unit and a determination unit, with the specific functions of each unit as described above.
  • The training device may include, but is not limited to, a processor 60 and a memory 61.
  • FIG. 8 is only an example of the training device 6 and does not constitute a limitation of the device; it may include more or fewer components than shown in the figure, combine certain components, or use different components. For example, the training device may also include input and output devices, network access devices, buses, etc.
  • The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
  • The memory 61 may be an internal storage unit of the training device, such as a hard disk or memory of the device. The memory 61 may also be an external storage device of the training device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the training device. The memory 61 may also include both an internal storage unit of the device and an external storage device. The memory 61 is used to store the computer instructions and other programs and data required by the terminal, and can also be used to temporarily store data that has been output or is to be output.
  • Embodiments of the present application also provide a computer storage medium.
  • The computer storage medium may be non-volatile or volatile. The computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the above training method embodiments are implemented.
  • This application also provides a computer program product.
  • When the computer program product is run on a training device, it causes the device to perform the steps in each of the above training method embodiments.
  • Embodiments of the present application also provide a chip or integrated circuit.
  • The chip or integrated circuit includes a processor, configured to call and run a computer program from a memory, so that a training device installed with the chip or integrated circuit performs the steps in each of the above training method embodiments.
  • The division into units or modules described above means dividing the internal structure of the device into different functional units or modules to complete all or part of the functions described above.
  • Each functional unit and module in the embodiments can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • The above-mentioned integrated unit can be implemented in the form of hardware or in the form of software functional units.
  • The specific names of each functional unit and module are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application.
  • For the specific working processes of the units and modules in the above system, please refer to the corresponding processes in the foregoing method embodiments, which will not be described again here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A training method, a training apparatus, a training device and a storage medium, relating to the technical field of computers. The training method comprises: randomly displaying a perceptual decision-making task, which comprises a visual classification task, an auditory classification task and an audio-visual classification task, the visual classification task comprising respectively classifying M first pictures, the auditory classification task comprising respectively classifying N first sounds, and the audio-visual classification task comprising respectively classifying L audio-visual stimulus pairs, each audio-visual stimulus pair comprising a second picture and a second sound corresponding to a target in the second picture; collecting behavior reaction data generated when a user completes the perceptual decision-making task; and determining a training result according to the behavior reaction data, the training result comprising the accuracy with which the user completes the perceptual decision-making task. In the training method, training is carried out in a multi-channel combined mode by means of the visual, auditory and audio-visual channels, such that the perceptual decision-making ability of an individual can be effectively improved.
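To make the claimed flow concrete, here is a minimal, hedged sketch of one training session: trials from the three classification tasks are randomly interleaved, behavior reaction data (response and reaction time) are collected, and accuracy is computed as the training result. The callables `present` and `get_response` and the trial layout are assumptions, not the patented implementation.

```python
import random
import time

def run_training(visual_trials, auditory_trials, audiovisual_trials,
                 present, get_response):
    """Interleave the three classification tasks at random and score them.

    Each trial is a (stimulus, correct_label) pair; present() shows a
    picture, plays a sound, or does both, and get_response() blocks until
    the user classifies the stimulus. Both callables are assumed interfaces.
    """
    trials = list(visual_trials) + list(auditory_trials) + list(audiovisual_trials)
    random.shuffle(trials)  # "randomly displaying a perceptual decision-making task"
    records = []
    for stimulus, correct_label in trials:
        present(stimulus)
        t0 = time.perf_counter()
        answer = get_response()
        reaction_time = time.perf_counter() - t0
        records.append((answer == correct_label, reaction_time))
    accuracy = sum(ok for ok, _ in records) / len(records)
    return accuracy, records  # training result plus behavior reaction data
```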
PCT/CN2022/138186 2022-06-13 2022-12-09 Training method, training apparatus, training device and storage medium WO2023240951A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210661128.7 2022-06-13
CN202210661128.7A CN115171658A (zh) 2022-06-13 Training method, training apparatus, training device and storage medium

Publications (1)

Publication Number Publication Date
WO2023240951A1 true WO2023240951A1 (fr) 2023-12-21

Family

ID=83486133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138186 WO2023240951A1 (fr) Training method, training apparatus, training device and storage medium

Country Status (2)

Country Link
CN (1) CN115171658A (fr)
WO (1) WO2023240951A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171658A (zh) * 2022-06-13 2022-10-11 Shenzhen Institute of Advanced Technology Training method, training apparatus, training device and storage medium
CN115691545B (zh) * 2022-12-30 2023-05-26 Hangzhou Nansu Technology Co., Ltd. Categorical perception training method and system based on a VR game
CN118609808B (zh) * 2024-05-27 2025-05-27 Air Force Medical University of the Chinese People's Liberation Army Multimodal method and apparatus for assessing the malignancy risk of adnexal masses

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105266805A (zh) * 2015-10-23 2016-01-27 South China University of Technology Consciousness state detection method based on an audio-visual brain-computer interface
US20190159715A1 (en) * 2016-08-05 2019-05-30 The Regents Of The University Of California Methods of cognitive fitness detection and training and systems for practicing the same
CN110022768A (zh) * 2016-08-26 2019-07-16 Akili Interactive Labs, Inc. Cognitive platform coupled with a physiological component
CN110347242A (zh) * 2019-05-29 2019-10-18 Changchun University of Science and Technology Audio-visual brain-computer interface spelling system based on spatial and semantic congruence, and method thereof
CN110786825A (zh) * 2019-09-30 2020-02-14 Zhejiang Fanju Technology Co., Ltd. Spatial perception disorder testing and training system based on virtual-reality audio-visual pathways
CN114201053A (zh) * 2022-02-17 2022-03-18 Beijing Zhijingling Technology Co., Ltd. Cognitive enhancement training method and system based on neuromodulation
CN115171658A (zh) * 2022-06-13 2022-10-11 Shenzhen Institute of Advanced Technology Training method, training apparatus, training device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018017767A1 (fr) * 2016-07-19 2018-01-25 Akili Interactive Labs, Inc. Platforms for implementing signal detection metrics in adaptive response-deadline procedures
KR102466438B1 (ko) * 2020-11-06 2022-11-14 The Hecave Art & Sports Co., Ltd. Cognitive function evaluation system and cognitive function evaluation method
CN113113115B (zh) * 2021-04-09 2022-11-08 Beijing Weiming Naonao Technology Co., Ltd. Cognitive training method, system and storage medium
CN114068012B (zh) * 2021-11-15 2022-05-10 Beijing Zhijingling Technology Co., Ltd. Multi-dimensional hierarchical drift-diffusion model modeling method for cognitive decision-making


Also Published As

Publication number Publication date
CN115171658A (zh) 2022-10-11

Similar Documents

Publication Publication Date Title
Manning et al. Taking language samples home: Feasibility, reliability, and validity of child language samples conducted remotely with video chat versus in-person
WO2023240951A1 (fr) Training method, training apparatus, training device and storage medium
Vargas-Cuentas et al. Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children
Thomas-Stonell et al. Predicted and observed outcomes in preschool children following speech and language treatment: Parent and clinician perspectives
Walker et al. Trends and predictors of longitudinal hearing aid use for children who are hard of hearing
Holte et al. Factors influencing follow-up to newborn hearing screening for infants who are hard of hearing
O’Brian et al. Measurement of stuttering in adults
Nkyekyer et al. The cognitive and psychosocial effects of auditory training and hearing aids in adults with hearing loss
Horn et al. Development of visual attention skills in prelingually deaf children who use cochlear implants
Wang et al. Attention to speech and spoken language development in deaf children with cochlear implants: A 10‐year longitudinal study
McNaney et al. Speeching: Mobile crowdsourced speech assessment to support self-monitoring and management for people with Parkinson's
Chan et al. Voice therapy for Parkinson’s disease via smartphone videoconference in Malaysia: A preliminary study
Jackson et al. Rate of language growth in children with hearing loss in an auditory-verbal early intervention program
Choi et al. Hearing and auditory processing abilities in primary school children with learning difficulties
James et al. Increased rate of listening difficulties in autistic children
Venail et al. Speech perception, real-ear measurements and self-perceived hearing impairment after remote and face-to-face programming of hearing aids: A randomized single-blind agreement study
Galazka et al. Facial speech processing in children with and without dyslexia
Ambrose et al. Assessing vocal development in infants and toddlers who are hard of hearing: A parent-report tool
Gelfer et al. Speaking fundamental frequency and individual variability in Caucasian and African American school-age children
Wilson et al. A preliminary investigation of sound-field amplification as an inclusive classroom adjustment for children with and without autism spectrum disorder
Gijbels et al. Audiovisual speech processing in relationship to phonological and vocabulary skills in first graders
Natzke et al. Measuring speech production development in children with cerebral palsy between 6 and 8 years of age: Relationships among measures
Stipancic et al. Improving perceptual speech ratings: The effects of auditory training on judgments of dysarthric speech
Walravens et al. Consistency of hearing aid setting preference in simulated real-world environments: Implications for trainable hearing aids
McAllister et al. Baseline stimulability predicts patterns of response to traditional and ultrasound biofeedback treatment for residual speech sound disorder

Legal Events

  • 121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22946617; Country of ref document: EP; Kind code of ref document: A1)
  • NENP Non-entry into the national phase (Ref country code: DE)
  • 122 Ep: PCT application non-entry in European phase (Ref document number: 22946617; Country of ref document: EP; Kind code of ref document: A1)