
CN115634356B - System and method for information implantation and memory editing in sleep - Google Patents

System and method for information implantation and memory editing in sleep

Info

Publication number
CN115634356B
CN115634356B
Authority
CN
China
Prior art keywords
cue
sleep
memory
time
stimulus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211442674.8A
Other languages
Chinese (zh)
Other versions
CN115634356A (en)
Inventor
王鹏云
李娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Psychology of CAS
Original Assignee
Institute of Psychology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Psychology of CAS filed Critical Institute of Psychology of CAS
Priority to CN202211442674.8A priority Critical patent/CN115634356B/en
Publication of CN115634356A publication Critical patent/CN115634356A/en
Application granted granted Critical
Publication of CN115634356B publication Critical patent/CN115634356B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a system and a method for information implantation and memory editing during sleep. The combination "A-cue one" and the combination "cue two-B" are first learned in the waking state, and the pairing of cue one with cue two is then presented during deep sleep, thereby binding A and B together in deep sleep. A and B can be events, objects, or even high-level semantic information such as emotions and social attitudes, so the system forms a general-purpose memory implantation and editing tool that realizes high-level semantic information implantation and memory editing in deep sleep.

Description

System and method for information implantation and memory editing in sleep
Technical Field
The invention relates to research on sleep and memory, and in particular to a system and a method for information implantation and memory editing during sleep.
Background
Implanting high-level semantic information and editing memories during human sleep has been a human aspiration for hundreds of years, yet it remains a scientific problem that has never truly been solved. A large body of research indicates that people cannot process and learn high-level semantic information during deep sleep, so new memories cannot be formed (memory implantation) and existing memories cannot be further modified (memory editing).
With respect to "learning new information while sleeping," the most intuitive form is the learning of verbal, declarative information, such as acquiring new English words by playing word recordings during sleep. Positive results on this question began to appear in studies at the beginning of the last century. By the 1950s, however, these results were criticized on experimental and statistical grounds and, most importantly, it was questioned whether the subjects were actually asleep at the time of learning. Later studies using electroencephalographic analysis showed that the ability to learn information during sleep was tied to the presence of the alpha rhythm in the EEG; that is, the moments at which this "learning" occurred were moments when the subjects were in fact briefly awake rather than asleep, and if the word recordings were presented during stage 2 sleep or REM sleep, learning did not succeed.
Many subsequent studies found that adults essentially cannot form memories of new verbal information (declarative memory) during sleep. Hippocampus-dependent learning is particularly difficult in sleep, especially learning the association between two high-level semantic concepts, i.e., the binding between two concepts, during sleep. Although this problem attracted scientists' attention very early, previous research never achieved it.
However, some recent high-impact research efforts provide a basis for achieving this goal.
On the one hand, the Paller group at Northwestern University in the United States found that presenting auditory cues associated with previously learned content during the deep sleep stage enhanced the related memory. Odor cues have a similar effect. Hu et al. found that this principle can be used to alter social bias. This phenomenon is called targeted memory reactivation (TMR).
The discovery and subsequent development of TMR is undoubtedly an important milestone in research on sleep-dependent memory consolidation. In particular, TMR makes it possible to "remodel" memory in the unconscious sleep state, either strengthening or weakening it. It gives humans active control over memory during sleep and can achieve effects that cannot, or cannot conveniently, be achieved in the conscious state.
However, these TMR studies rest on a basic fact: they merely "remodel" (enhance or weaken) memories that already existed in the brain and hippocampus before sleep; the subject does not learn any new information during sleep. This is still far from the original goal of "implanting new information and editing memory during sleep."
On the other hand, recent studies have found that a simple association between a sound and an odor (a conditioned-reflex relationship that carries no high-level semantic information) can be learned during deep sleep, and that the association is retained after waking. When people smell a pleasant fragrance they unconsciously inhale more deeply, and when they smell an unpleasant odor they unconsciously inhale less. Exploiting this, Arzi et al. presented the pairings "high-frequency pure tone-pleasant fragrance" and "low-frequency pure tone-unpleasant odor" to subjects during deep sleep. After repeated presentations, presenting the high-frequency tone alone during deep sleep increased the subjects' respiration volume, while presenting the low-frequency tone alone decreased it, indicating that the tone-odor pairing had been learned during deep sleep. More interestingly, after waking, the high-frequency pure tone still made subjects involuntarily breathe more deeply and the low-frequency pure tone made them involuntarily breathe more shallowly, showing that the tone-odor association learned in deep sleep persisted after waking. Subsequently, Arzi et al. presented pairings of cigarette smoke and malodorous gases during deep sleep to subjects attempting to quit smoking, and found that their smoking behavior after waking was reduced. These two studies are the first direct evidence in the world that new information (hippocampus-dependent paired learning, here sound-odor and odor-odor combinations) can be formed during deep sleep.
However, the newly learned information in those studies is a classical conditioned-reflex coupling that lacks high-level semantic content, so implantation and editing of truly meaningful memory (high-level semantic information) cannot be achieved this way. Finding a general method for implanting and editing high-level semantic information during deep sleep has therefore become an urgent problem.
Based on the above, the inventors conducted further intensive research on memory editing during sleep and designed a new system and method that truly realize the editing and implantation of high-level semantic information in sleep, binding events, objects, and even high-level semantic information such as emotions and social attitudes to one another during deep sleep, thereby forming a general-purpose memory implantation and editing system and method.
Disclosure of Invention
In order to overcome the above problems, the present inventors conducted intensive studies and designed a system and a method for information implantation and memory editing during sleep, in which the combination "A-cue one" and the combination "cue two-B" are learned in the waking state, and the pairing of cue one with cue two is then presented in deep sleep, thereby binding A and B in deep sleep. A and B can be events, objects, or even high-level semantic information such as emotions and social attitudes, so that a general-purpose memory implantation and editing tool is formed and high-level semantic information implantation in deep sleep can be realized. On this basis the present invention was completed.
Specifically, the present invention aims to provide a system for information implantation and memory editing during sleep, the system comprising:
a sleep stage monitor 1 for monitoring sleep stages in real time;
a sleep brain wave real-time phase monitor 2 for monitoring phase information of real-time sleep brain waves;
a sensory stimulator 3 for providing sensory stimuli;
and a processor 4 which is connected with the sleep stage monitor 1, the sleep brain wave real-time phase monitor 2 and the sensory stimulator 3 and performs information implantation and memory editing work by controlling the sensory stimulator 3.
Wherein the sensory stimulator 3 is used for providing learning memory content while the user is awake and for providing memory-inducing cues while the user is asleep;
the learning memory content comprises two parallel combinations, namely a combination of A with cue one and a combination of B with cue two;
the memory-inducing cues comprise the paired coupling of cue one and cue two.
Wherein the processor 4 comprises a sensory stimulation module 41,
the sensory stimulation module 41 is connected with the sensory stimulator 3 and is used for setting, storing, and inputting the parameters of the cue stimuli into the sensory stimulator 3 (an illustrative parameter sketch follows the list below);
preferably, the cue stimuli include sound, smell, light, motion, tactile information, temperature information, electricity, and magnetic fields;
when the cue stimulus is sound, the corresponding parameters include loudness, sound frequency, sound content, duration, and timbre;
when the cue stimulus is a smell, the corresponding parameters include the components of the substance generating the smell, the proportion of each component, the delivery mode of the smell, and the ambient temperature, humidity, and wind speed during delivery;
when the cue stimulus is light, the corresponding parameters include the frequency, intensity, presentation time point, presentation duration, and presentation form of the light;
when the cue stimulus is motion, the corresponding parameters include the direction, acceleration, speed, and duration of the motion;
when the cue stimulus is tactile information, the corresponding parameters include the pressure intensity, stimulation frequency, and stimulation form of the applied tactile information;
when the cue stimulus is electricity, the corresponding parameters include the voltage and current, the stimulation form, the stimulation time, frequency, and position;
when the cue stimulus is a magnetic field, the corresponding parameters include the strength, form, stimulation time, and stimulation location of the magnetic field.
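A minimal sketch of how the sensory stimulation module might represent such per-modality parameters is shown below. The class and field names are assumptions made for illustration only; the patent enumerates parameter categories but does not define a data format or API.

```python
from dataclasses import dataclass

# Illustrative containers for the per-modality cue parameters listed above.
# All names and example values are assumptions for this sketch.

@dataclass
class SoundCue:
    volume_db: float              # loudness ("sound size")
    frequency_hz: float           # sound frequency
    content: str                  # e.g. a label such as "cue3" or a wav file path
    duration_ms: int              # presentation duration
    timbre: str = "pure_tone"

@dataclass
class OdorCue:
    components: dict              # substance name -> proportion
    delivery_mode: str            # e.g. "nasal_airflow"
    ambient_temperature_c: float
    ambient_humidity_pct: float
    ambient_wind_speed_mps: float

# Example: two auditory cues of the kind used later in the embodiment (values illustrative).
cue_one = SoundCue(volume_db=55.0, frequency_hz=880.0, content="cue3", duration_ms=1000)
cue_two = SoundCue(volume_db=55.0, frequency_hz=440.0, content="cue4", duration_ms=1000)
```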
The processor 4 includes a cue pairing coupling module 42,
the cue pairing coupling module 42 is connected with the sensory stimulator 3 and is used for setting, storing, and inputting time information into the sensory stimulator 3;
the time information comprises the presentation duration of each of the two cues in a cue group, the interval between the paired coupling of cue one and cue two within a cue group, and the interval between two adjacent cue groups;
a cue group refers to two cues that form a paired coupling.
The stimulation time of each of cue one and cue two is 500-3000 milliseconds;
the interval between cue one and cue two within a paired-coupled cue group is 0-2000 milliseconds;
the interval between two adjacent cue groups is more than 8000 milliseconds (see the scheduling sketch below).
Wherein the processor 4 comprises a cue delivery timing control module 43,
the cue delivery timing control module 43 is connected with the sleep brain wave real-time phase monitor 2 and the sensory stimulator 3, and determines the delivery time of the memory-inducing cues according to the phase information of the sleep brain waves.
The cue delivery timing control module 43 controls the delivery of cue one of the memory-inducing cues to occur during the ascending phase of the sleep slow wave.
The application also provides a method for information implantation and memory editing in sleep, which is realized by the system for information implantation and memory editing in sleep.
The method comprises the following steps:
Step 1, providing learning memory content through the sensory stimulator 3 while the user is awake; the learning memory content comprises two parallel combinations, namely a combination of A with cue one and a combination of B with cue two;
Step 2, providing memory-inducing cues through the sensory stimulator 3 while the user is asleep; the memory-inducing cues comprise the paired coupling of cue one and cue two.
The invention has the advantages that:
(1) The system and method for information implantation and memory editing during sleep provided by the invention can perform general-purpose memory implantation and editing and can realize high-level semantic information implantation in deep sleep;
(2) In the system and method provided by the invention, two independent cues are paired and coupled, so that a new association is induced between the two separate memories corresponding to the two cues, forming a newly implanted concept;
(3) In the system and method provided by the invention, the effect and efficiency of memory implantation are improved by setting specific values for the interval and intensity of the two cues and for the interval between paired-coupled cue groups;
(4) In the system and method provided by the invention, the effect and efficiency of memory implantation are improved by setting a specific cue delivery time.
Drawings
FIG. 1 illustrates an overall logic diagram of a system for information implantation and memory editing during sleep in accordance with a preferred embodiment of the present invention;
FIG. 2 is a diagram illustrating the correspondence between learning and memorizing contents and binding relationships in an embodiment of the present invention;
FIG. 3 is a diagram illustrating the implicit association test in an embodiment of the invention;
FIG. 4 is a schematic diagram showing the attitude changes to cats and dogs before and after sleep in an embodiment of the invention;
FIG. 5 is a schematic diagram showing the attitude change of cats and dogs before and after sleep in comparative example 1 of the invention;
fig. 6 is a graph showing the change of attitudes before and after sleep in cat and dog according to comparative example 2 of the present invention.
Description of the reference numerals
1-sleep stage monitor
2-sleep brain wave real-time phase monitor
3-sensory stimulator
4-processor
41-sensory stimulation module
42-cue pairing coupling module
43-cue delivery timing control module
Detailed Description
The invention is explained in more detail below with reference to the figures and examples. The features and advantages of the present invention will become more apparent from the description.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
According to the system for information implantation and memory editing in sleep provided by the invention, as shown in fig. 1, the system comprises a sleep stage monitor 1, a sleep brain wave real-time phase monitor 2, a sense stimulator 3 and a processor 4;
the sleep stage monitor 1 is used for monitoring sleep stages in real time and performing sleep staging. According to the sleep characteristics, the sleep stages are divided into a waking stage, a non-rapid eye movement sleep 1 stage, a non-rapid eye movement sleep 2 stage, a non-rapid eye movement sleep 3/4 stage and a rapid eye movement sleep 5 stages. In the application, the sleep stage monitor is arranged to monitor the sleep stages, so that on one hand, a subject can be determined to carry out memory editing in a sleep state, namely, the memory control/editing in an unconscious state in sleep is ensured; if the subject wakes up during sleep due to its own factors (e.g., pain or sleep apnea) or external factors (e.g., external noise), the sleep stage monitor can detect in real time that the subject wakes up, thereby ceasing the subsequent delivery of the cue stimuli. On the other hand, a number of studies have found that different memory types are sleep consolidated in different stages of sleep. For example, declarative memory (such as remembering what happened on a day, as another example, reciting english words) is consolidated during non-rapid eye movement sleep stages; and the procedural memory (such as skills of learning to ride a bicycle, playing a piano and the like) is consolidated in the sleep stage of quick eye movement. Therefore, information implantation and memory editing during sleep also require stimulation delivery and related operations at corresponding sleep stages according to different types of memory.
The sleep brain wave real-time phase monitor 2 is used for monitoring phase information of real-time sleep brain waves;
the sense stimulator 3 is used for providing sense stimulation; the sensory stimulator 3 comprises a presentation device, such as a computer display screen, a book, a sound device, a skill learning device (e.g. a piano, etc.), and can be used for presentation of learning and memory contents; the sensory stimulator 3 further includes stimulation devices such as sounds, odorants, light (or laser) stimulators, motion controllers, vibrators, shakers, tactile stimulators such as press/flick, electrical stimulators, magnetic field stimulators, etc., for generating stimulation cues.
The processor 4 is connected with the sleep stage monitor 1, the sleep brain wave real-time phase monitor 2 and the sensory stimulator 3, and can interactively transmit information, and the processor can also perform information implantation and memory editing work by controlling the sensory stimulator 3.
In a preferred embodiment, the sensory stimulator 3 is configured to provide learning memory content while the user is awake and to provide memory-inducing cues while the user is asleep;
the learning memory content comprises two parallel combinations, namely a combination of A with cue one and a combination of B with cue two;
the memory-inducing cues comprise the paired coupling of cue one and cue two.
In this application, both A and B are high-level semantic information; they may specifically be events, objects, emotions, or social attitudes. The combination of A with cue one means that A and cue one are paired so as to establish a memory association, such that when cue one is later presented to the user, the user's memory of A is automatically activated. The combination may be formed by presenting A and cue one simultaneously, or by presenting them in sequence in the manner of conditioning. For example, if A is a thing, say the animal "dog", and cue one is the scent of roses, the sensory stimulator 3 provides learning memory content consisting of a picture of a dog together with a rose scent. A and B may also be events of some kind, such as earthquakes, violence, or other stressful events, presented for example as pictures, sounds, or video clips of an earthquake or violent scene.
A and B may also be emotion-related. Positive emotion can be embodied by pleasant scene pictures, for example beautiful landscapes, architecture, or people chosen according to the taste of the particular user, or by monetary reward, i.e., the user is told in advance that seeing a specific picture will later be rewarded with money. Negative emotion can be embodied, for example, by unpleasant odors, exposing the user within ethical limits to aversive odor stimuli, or by aversive scene pictures, such as pictures of garbage or ruins, that produce negative emotion.
It should be emphasized that in practical applications A and B may each correspond to several high-level semantic items at the same time, with cue one and cue two correspondingly comprising several stimuli, i.e., several memories are edited simultaneously.
The time and number of presentations required for each item of learning memory content are determined by the learning outcome; the goal is that when cue one or cue two is presented, the user quickly and automatically brings A or B to mind. For example, if cue one is a rose scent and A is the thing "dog", learning is successful when smelling the rose scent immediately and automatically brings the just-learned concept or image of a dog to consciousness. The criterion for stopping presentation of the learning memory content can be set flexibly according to need; for example, a semantic priming paradigm can be used: if smelling the rose scent primes the semantic concept "dog", i.e., subsequent semantic processing of "dog" is significantly speeded, this shows that cue one and A have successfully formed a semantic association and presentation of the learning content can stop (a sketch of such a priming check is given below).
When providing the learning memory content there is in principle no required order between the two parallel combinations, but the interval between them should be as long as possible without harming memory, so that the two combinations are not confused with each other. An ideal arrangement is to learn one combination in the morning, the other in the afternoon, and to perform the memory editing or information implantation operation during sleep that night.
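The semantic priming check mentioned above could be implemented as a simple reaction-time comparison: if the rose scent has become associated with "dog", responses to dog-related targets should be faster after the scent than after a neutral prime. The numbers and the p < 0.05 criterion in the sketch below are illustrative assumptions, not values specified by the patent.

```python
from scipy import stats

# Sketch of the priming check: compare response times to dog-related targets
# after the rose-scent prime versus a neutral prime (paired by item or subject).
rt_after_scent_ms   = [540, 565, 530, 555, 548, 560]   # hypothetical data
rt_after_neutral_ms = [610, 595, 620, 605, 630, 598]   # hypothetical data

t, p = stats.ttest_rel(rt_after_scent_ms, rt_after_neutral_ms)
if p < 0.05 and t < 0:
    print("Priming effect present: cue one and A appear associated; stop training.")
else:
    print("No reliable priming effect yet: continue presenting the learning content.")
```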
In a preferred embodiment, the processor 4 comprises a sensory stimulation module 41. The sensory stimulation module 41 is connected with the sensory stimulator 3, the two can exchange information in real time, and the module is used for setting the parameters of the cue stimuli, storing them, and inputting them into the sensory stimulator 3.
The parameters of the cue stimuli are the parameters of cue one and cue two, both when the learning memory content is presented and when the memory-inducing cues are provided. The parameters used for the learning memory content and those used for the memory-inducing cues are set independently of each other; they are usually the same but are allowed to differ.
Preferably, the cue stimuli include sound, smell, light, motion, tactile information, temperature information, electricity, magnetic fields, and the like;
when the cue stimulus is sound, the corresponding parameters include loudness, sound frequency, sound content, duration, timbre, and the like;
when the cue stimulus is a smell, the corresponding parameters include the components of the substance generating the smell, the proportion of each component, the delivery mode, and the ambient temperature, humidity, and wind speed during delivery;
when the cue stimulus is light, the corresponding parameters include the frequency, intensity, presentation time point, presentation duration, presentation form, and the like;
when the cue stimulus is motion, the corresponding parameters include the direction, acceleration, speed, and duration of the motion; here motion means using an external force to move a part of the body or the whole body, for example shaking the hands.
When the cue stimulus is tactile information, the corresponding parameters include the pressure intensity, stimulation frequency, and stimulation form (e.g., vibration, pressing, stroking) of the applied tactile information; tactile information means tactile stimulation of a body part or of the whole body with a device, for example vibration of the wrist by a wristband.
When the cue stimulus is electricity, the corresponding parameters include the voltage and current, the stimulation form (e.g., direct current, alternating-current waveform and frequency), the stimulation time, frequency, and position;
when the cue stimulus is a magnetic field, the corresponding parameters include the strength, form, stimulation time, and stimulation location of the magnetic field.
In a preferred embodiment, the processor 4 includes a cue pairing coupling module 42, which pairs and couples two independent cues so that a new association is induced between the two separate memories corresponding to the cues, thereby forming a new concept in the subject's unconscious state.
The cue pairing coupling module 42 is connected with the sensory stimulator 3 and is used for setting, storing, and inputting time information into the sensory stimulator 3. The time information comprises the presentation duration of each of the two cues in a cue group, the interval between cue one and cue two within a paired-coupled cue group, and the interval between two adjacent cue groups. A cue group refers to the two cues that form a paired coupling, i.e., the combination of cue one and cue two.
The respective stimulation time of each of the first thread and the second thread is 500-3000 milliseconds; it may be adjusted according to the type of stimulus of the cues, such as scent cues that can be presented for a longer time, for example 1000-3000 milliseconds.
The interval duration of the paired coupling of the first thread and the second thread in the thread group is 0-2000 milliseconds; the adjustment may be made according to the stimulation type of the cue, and preferably, the interval duration takes 0, i.e., there is no interval duration. When the first clue and the second clue belong to the same kind of stimulus, the interval duration is 0, so that the operation is easy, when the first clue and the second clue belong to different kinds of stimuli, the interval duration is difficult to be 0, and the interval duration is controlled within 2000 milliseconds, so that an effective clue group can be formed.
The time interval between two adjacent clue groups is more than 8000 milliseconds, but cannot be too long, so that the reduction of the relevance of the two clues in the clue groups is avoided, and the adjustment can be carried out according to the stimulation types of the clues. In a preferred embodiment, the processor 4 comprises a cue delivery time control module 43, the cue delivery time control module 43 is connected with the sleep brain wave real-time phase monitor 2 and the sensory stimulator 3, and determines the delivery time of the memory-inducing cue according to the phase information of the sleep brain wave, wherein the delivery time has a large influence on the final result of information implantation and memory editing, and different delivery times have completely different corresponding final results.
Preferably, the thread delivery timing control module 43 controls the delivery of the first of the memory-inducing threads during the ascending phase of the sleep slow wave of the subject.
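One common way to estimate the slow-oscillation phase from the EEG, offered here only as an illustrative assumption since the patent does not specify its algorithm, is to band-pass the signal around the slow-oscillation frequency and take the Hilbert phase; cue one would then be triggered while the phase lies in the ascending (trough-to-peak) half of the wave. A real-time system would need a causal filter and phase prediction rather than the offline processing shown in this sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ascending_phase_mask(eeg, fs, band=(0.5, 1.5)):
    """Boolean mask of samples in the ascending (trough-to-peak) phase of the
    sleep slow oscillation. Filter band and phase convention are common choices
    in the TMR literature, not values from the patent."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    slow = filtfilt(b, a, eeg)              # slow-oscillation component
    phase = np.angle(hilbert(slow))         # 0 rad = peak, +/- pi = trough
    return (phase > -np.pi) & (phase < 0)   # rising from trough toward peak

# Example with synthetic data: a 1 Hz "slow wave" sampled at 200 Hz.
fs = 200
t = np.arange(0, 10, 1 / fs)
eeg = np.cos(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)
mask = ascending_phase_mask(eeg, fs)
print(f"{mask.mean():.0%} of samples classified as ascending phase")
```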
The invention also provides a method for information implantation and memory editing in sleep, which is realized by the system for information implantation and memory editing in sleep.
Preferably, the method comprises the steps of:
Step 1, providing learning memory content through the sensory stimulator 3 while the user is awake; the learning memory content comprises two parallel combinations, namely a combination of A with cue one and a combination of B with cue two;
Step 2, providing memory-inducing cues through the sensory stimulator 3 while the user is asleep; the memory-inducing cues comprise the paired coupling of cue one and cue two.
Further preferably, in step 1 the learning memory content is provided continuously until the user has successfully formed the combination of A with cue one and the combination of B with cue two. Once both combinations are formed, step 1 is considered complete and step 2 can be carried out.
In step 2, the stimulation time of each of cue one and cue two is 500-3000 milliseconds; the interval between cue one and cue two within a paired-coupled cue group is 0-2000 milliseconds, preferably 0 milliseconds, i.e., no gap; and the interval between two adjacent cue groups is more than 8000 milliseconds.
In step 2, the intensity of the memory-inducing cues depends on the individual user; it is set so that the user can clearly perceive the cues in a quiet, awake state, while ensuring that the user is not aroused from sleep.
After the cue group has been presented a number of times, if a stronger semantic association between A and B is observed once the user is awake, step 2 is considered to have been carried out successfully, the information has been implanted, and the memory editing is complete.
For example, let A be the thing "dog", B the reward represented by money, cue one the scent of roses, and cue two the sound "drip". In step 1 the rose-dog pairing and the semantic pairing of the sound "drip" with reward are learned separately. Learning success can be checked with the semantic priming paradigm: when the user smells roses, the semantic concept "dog" is processed faster, and when the user hears the "drip" sound, a more pleasant (reward-related) feeling arises, indicating that step 1 has been completed successfully. Step 2 is then carried out during sleep: the rose scent and the sound "drip" are paired and presented repeatedly. After the user wakes, a test is performed, for example a semantic priming test; if the user's implicit attitude toward dogs is more positive than before sleep (dogs are liked more), step 2 has succeeded and the memory editing concerning dogs (liking dogs more) has been successfully carried out.
Example:
Purpose: using the method for information implantation and memory editing during sleep, memory editing is performed on the subject during sleep so as to link the concept "cat" with reward, thereby making the subject's attitude toward cats more positive, i.e., the subject likes cats more after waking.
Subjects participating in the study: because sleep may be interrupted, deep sleep may not be reached, or the time in deep sleep may be too short to present the sound stimuli, about 50% redundancy in the number of subjects is required. 106 healthy young people were recruited as subjects for this experiment, aged 19-23 years (mean age 21.3 years), 53 men and 53 women, all self-reporting no sleep disorders.
Step 1, measuring every subject's baseline attitude toward cats and dogs.
The baseline attitudes toward cats and dogs were measured with the Implicit Association Test (IAT).
Procedure of the IAT: two types of material are presented simultaneously, as shown in fig. 3. Type-one material, shown at the center of the screen, is a picture of a cat or a dog, or a word from the positive set ("lovely, lambertian, friendly, happy") or the negative set ("offensive, inlaying, dirty, annoying").
Type-two material consists of descriptive labels on the two sides of the screen, in two arrangements: in the first, the combination "positive words + dog" appears on the left side of the screen and the combination "negative words + cat" on the right; in the second, "negative words + dog" appears on the left and "positive words + cat" on the right.
Subjects judge, by pressing the left or right key, whether the type-one material belongs to the left or the right category of the type-two material. The positive or negative bias of a subject's attitudes is assessed by recording accuracy and reaction time and comparing the differences in accuracy and reaction time between the two arrangements.
The specific baseline attitude measurement follows: A. G. Greenwald, D. E. McGhee, J. L. K. Schwartz, Measuring individual differences in implicit cognition: the implicit association test, J. Pers. Soc. Psychol. 74, 1464-1480 (1998).
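A simplified sketch of how an implicit-attitude score could be derived from the two IAT pairing conditions is shown below. It only illustrates the scoring idea; the full published algorithm of Greenwald and colleagues adds error penalties and trial-level trimming, and the data here are hypothetical.

```python
import numpy as np

def iat_d_score(rt_congruent_ms, rt_incongruent_ms):
    """Simplified IAT D score: difference of mean reaction times between the two
    pairing conditions divided by their pooled standard deviation."""
    rt1 = np.asarray(rt_congruent_ms, float)    # e.g. "dog + positive / cat + negative" block
    rt2 = np.asarray(rt_incongruent_ms, float)  # e.g. "dog + negative / cat + positive" block
    pooled_sd = np.concatenate([rt1, rt2]).std(ddof=1)
    return (rt2.mean() - rt1.mean()) / pooled_sd

# Toy example: faster responses when dog shares a key with positive words
# yield a positive D score (relative preference for dogs over cats).
print(iat_d_score([620, 650, 640, 700], [720, 760, 740, 810]))
```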
Step 2, providing the learning memory content to the subject.
Learning memory content one: A is the animal "dog", shown to the subject as a picture; cue one is a sound stimulus, specifically the simple sound "ding". B is a negative-emotion picture representing punishment; cue two is another sound stimulus, specifically the simple sound "dong".
Learning memory content two: A is the animal "cat", shown to the subject as a picture; cue one is a sound stimulus, specifically the sound "cue3". B is a positive-emotion picture representing reward; cue two is another sound stimulus, specifically the sound "cue4".
While the subject was awake, an LED visual display and an auditory loudspeaker were selected as the sensory stimulators to provide learning memory content one and learning memory content two;
the two sets of learning memory content were provided to each subject by the sensory stimulator so that each subject formed the binding relationships shown in fig. 2.
When the two sets of learning memory content are provided, the A-cue one combinations can be learned together first, and after that learning is finished the B-cue two combinations can be learned together.
Learning the A-cue one combinations together, i.e., learning the matched pairings of cat/dog pictures and sounds:
informing the user that the sound and the animal pairing relationship are to be learned: the "bite" corresponds to a dog and the "cue3" corresponds to a cat. Storing 20 pictures of a cat and a dog respectively, and storing the sound of 'ding' and 'cue 3'; firstly, randomly playing one sound of 'ding' and 'cue 3', wherein the playing time is 1000ms, after the playing is finished, 500ms is used for extracting and displaying pictures of cats or dogs by a display, the pictures disappear after 500ms, timing is started when the pictures appear, whether a key is pressed or not and the pressing time are recorded, after 5000ms after the pictures disappear, a learning period is finished, a next learning period is directly started, one sound of 'ding' and 'cue 3' is randomly played, the playing time is 1000ms, after 500ms, the pictures of cats or dogs are extracted and displayed by the display, \8230, 8230. The pictures of cats are presented and the sound of "cue3" is provided as the correct combination, the pictures of dogs are presented and the sound of "ding" is provided as the correct combination, the pictures of cats are presented and the sound of "ding" is provided as the wrong combination, the pictures of dogs are presented and the sound of "cue3" is provided as the wrong combination, and in the randomly provided process, the correct combination and the wrong combination account for 50 percent. And (3) requiring the object to quickly press a key to judge whether the picture is matched with the sound, namely pressing the key if the picture is matched with the sound, not pressing the key if the picture is not matched with the sound, and circularly presenting all the learning periods, wherein the judgment is correct within 20 continuous learning periods of the object, and the key reaction (starting timing after the picture appears) is less than 500ms, then stopping the program, and at this time, the object is used to skillfully establish the matching relationship between the sound and the cat and dog.
Learning the B-cue two combinations together, i.e., learning the matched pairings of reward/punishment pictures and sounds:
The subject is told that sound-emotion pairings are to be learned: "dong" corresponds to a negative-emotion picture and "cue4" corresponds to a positive-emotion picture. Twenty positive-emotion pictures and twenty negative-emotion pictures are stored, together with the sounds "dong" and "cue4". First, one of the sounds "dong" and "cue4" is played at random for 1000 ms; 500 ms after it ends, the display presents a positive- or negative-emotion picture, which disappears after 500 ms. Timing starts when the picture appears, and whether a key is pressed and the time of the press are recorded. The learning period ends 5000 ms after the picture disappears, and the next learning period starts immediately, again with one of "dong" and "cue4" played at random for 1000 ms followed 500 ms later by a positive- or negative-emotion picture, and so on. A positive-emotion picture with the sound "cue4" is a correct combination, a negative-emotion picture with "dong" is a correct combination, a positive-emotion picture with "dong" is an incorrect combination, and a negative-emotion picture with "cue4" is an incorrect combination; over the random sequence, correct and incorrect combinations each make up 50%. The subject is asked to judge as quickly as possible whether the picture matches the sound: press the key if they match, do not press if they do not. All learning periods are presented cyclically; when the subject has judged correctly for 20 consecutive learning periods with key responses (timed from picture onset) faster than 500 ms, the program stops. At this point the subject has firmly established the pairing between the sounds and positive and negative emotions.
According to each subject's likes and dislikes, specific positive- and negative-emotion pictures are selected for each subject: the positive-emotion pictures include one or more of beautiful landscapes, magnificent architecture, and attractive people; the negative-emotion pictures include one or more of dirty, messy garbage scenes, ruins, and unattractive people.
Step 3, providing the memory-inducing cues to the subject.
For subjects who successfully completed the four pre-sleep learning tasks described above, bedtime was around 22:00 on the day of the experiment, and the sleep stage monitor, the sleep brain wave real-time phase monitor, and the sensory stimulator were connected before sleep. When the subject was detected to be in the ascending phase of the sleep slow wave, the memory-inducing sound cues were delivered by the auditory loudspeaker serving as the sensory stimulator.
When providing the memory-inducing cues, cue one was the sound cue "cue3" and cue two was the sound cue "cue4"; the interval within the paired coupling was 0 milliseconds, i.e., the sound cues "cue3" and "cue4" were presented back to back with no gap, and the interval between two adjacent paired-coupled cue groups was 8000 milliseconds.
The memory-inducing cues were provided continuously during the slow-wave sleep stage for an average of about 30 minutes per subject, i.e., about 180 paired-coupled cue groups were presented to each subject. The number of presentations of cue one was the same as that of cue two; white noise was played throughout the subject's sleep.
Step 4, after sleep, the attitude test toward cats and dogs was administered again to every subject, using exactly the same method as in step 1.
Final results: the pre- and post-sleep IAT scores were compared using an analysis of variance with factors animal (cat vs. dog) x sleep (pre-sleep vs. post-sleep). Because of the way the IAT was set up in this experiment, for dogs an increase in the score means dogs are liked more and a decrease means they are liked less, whereas for cats the opposite holds: a decrease in the score indicates greater liking and an increase indicates less liking.
The number of subjects who successfully completed the sleep experiment was 80. The specific experimental data (pre- and post-sleep comparison of preference for cats and dogs) are shown in the following table:
[Table: experimental result data (pre- and post-sleep IAT preference scores for cats and dogs), presented as an image in the original document.]
The ANOVA showed a significant sleep x cat/dog interaction, F(1,79) = 4.307, p = 0.041, partial eta squared = 0.052. Further simple-effects analysis showed that after sleep memory editing the subjects' attitude toward dogs did not change significantly, while their attitude toward cats improved significantly, as shown in fig. 4, where the ordinate is attitude preference (for cats, a lower score means greater liking) and the asterisk marks the significant interaction, p < 0.05. Fig. 4 demonstrates a successful memory operation during sleep.
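For reference, a 2 x 2 repeated-measures ANOVA of this kind could be run on per-subject IAT scores, for example with statsmodels. The data layout, column names, and file name in the sketch below are assumptions, not part of the patent.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format file: one row per subject x animal x time,
# with columns: subject, animal ("cat"/"dog"), time ("pre"/"post"), iat_score.
df = pd.read_csv("iat_long.csv")

model = AnovaRM(df, depvar="iat_score", subject="subject",
                within=["animal", "time"])
# The animal x time interaction term tests the memory-editing effect.
print(model.fit())
```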
Comparative example 1
Subjects essentially matching those in the example in physical condition, age, and education were selected, and the same information implantation and memory editing method as in the example was used, except that when the memory-inducing cues were provided, cue one and cue two were not paired and coupled but were presented in separate blocks. The subject was first given three minutes of cue one (sound cue "cue3") alone, each cue lasting 1 second with 9 seconds between consecutive cues. After 18 presentations of cue one over three minutes, cue two (sound cue "cue4") was presented in the same way, i.e., 1 second per cue with 9-second gaps. After 18 presentations of cue two over three minutes, another 3 minutes of cue one followed, and cue one and cue two alternated in this way until the subject left the slow-wave sleep stage, at which point delivery of the memory-inducing cues stopped.
Final results: the number of subjects who successfully completed the sleep experiment was 78. The pre- and post-sleep IAT scores were compared using an analysis of variance with factors animal (cat vs. dog) x sleep (pre-sleep vs. post-sleep). The specific data for comparative example 1 (pre- and post-sleep comparison of preference for cats and dogs) are shown in the following table:
[Table: comparative example 1 result data (pre- and post-sleep IAT preference scores for cats and dogs), presented as an image in the original document.]
The ANOVA showed that the sleep x cat/dog interaction was not significant, F(1,77) = 0.142, p = 0.708, partial eta squared = 0.003. As shown in fig. 5, which plots the attitude change toward cats and dogs before and after sleep (ordinate: attitude preference), no statistically significant change in the subjects' attitudes toward cats or dogs was found before versus after sleep. Fig. 5 indicates that no successful memory operation took place during sleep in this comparative example.
Conclusion: under these conditions the subjects' post-sleep preference for cats did not differ significantly from their pre-sleep preference, just as their preference for dogs (which was not operated on) did not change. This shows that the sleep operation had no effect on the subjects' attitude toward cats. Taken together, the example and comparative example 1 show that the paired coupling of the two cues, cue one and cue two, is the key operation for successfully forming the semantic association.
This single-variable comparison shows that delivering the two cues of this application as a paired-coupled cue group during slow-wave sleep is the key operation that allows a semantic association to be established between thing A and thing B and thereby produces the new information implantation and memory editing effect; it has a decisive influence on the final result.
Comparative example 2
Subjects essentially matching those in the example in physical condition, age, and education were selected, and the same information implantation and memory editing method as in the example was used, except that when the memory-inducing cues were provided, the interval between cue one (sound cue "cue3") and cue two (sound cue "cue4") within each pair was 8000 milliseconds; the interval between adjacent paired-coupled cue groups was also 8000 milliseconds.
Final results: the number of subjects who successfully completed the sleep experiment was 82. The pre- and post-sleep IAT scores were compared using an analysis of variance with factors animal (cat vs. dog) x sleep (pre-sleep vs. post-sleep). The specific experimental data (pre- and post-sleep comparison of preference for cats and dogs) are shown in the following table:
[Table: comparative example 2 result data (pre- and post-sleep IAT preference scores for cats and dogs), presented as an image in the original document.]
The ANOVA showed no significant sleep x cat/dog interaction, F(1,81) = 0.002, p = 0.967, partial eta squared < 0.001. As shown in fig. 6, no successful memory operation took place during sleep: no significant difference was found between the attitudes toward cats and dogs before and after the sleep operation.
Conclusion: under these conditions the subjects' post-sleep preference for cats did not differ significantly from their pre-sleep preference, just as their preference for dogs (which was not operated on) did not change. This shows that the sleep operation had no effect on the subjects' attitude toward cats. Taken together, the example and comparative example 2 show that pairing and coupling cue one and cue two at a suitable interval is the key operation for forming a cue group and successfully creating the semantic association; if the interval between the two cues is too long, no paired combination is formed and the A and B behind the cues cannot be effectively associated.
This single-variable comparison shows that delivering the two cues of this application as a paired-coupled cue group during slow-wave sleep is the key operation that allows a semantic association to be established between thing A and thing B and thereby produces the new information implantation and memory editing effect. The interval between the two cues must not be too long: when the interval between the sound cues reached 8000 milliseconds in comparative example 2, A and B could not be semantically associated, which decisively affected the final result.
The present invention has been described above in connection with preferred embodiments, but these embodiments are merely exemplary and merely illustrative. On the basis of the above, the invention can be subjected to various substitutions and modifications, and the substitutions and the modifications are all within the protection scope of the invention.

Claims (4)

1. A system for information implantation and memory editing during sleep, the system comprising
A sleep stage monitor (1) for monitoring sleep stages in real time;
a sleep brain wave real-time phase monitor (2) for monitoring phase information of real-time sleep brain waves;
a sensory stimulator (3) for providing sensory stimuli;
the processor (4) is connected with the sleep stage monitor (1), the sleep brain wave real-time phase monitor (2) and the sensory stimulator (3) and is used for executing information implantation and memory editing work by controlling the sensory stimulator (3);
the sensory stimulator (3) is used for providing learning memory content while the user is awake and for providing memory-inducing cues while the user is asleep;
the learning memory content comprises two parallel combinations: A with cue one, and B with cue two; A and B are high-level semantic information;
the memory-inducing cues comprise the paired coupling of cue one and cue two;
the processor (4) comprises a cue pairing coupling module (42),
the cue pairing coupling module (42) is connected with the sensory stimulator (3) and is used for setting, storing, and inputting time information into the sensory stimulator (3);
the time information comprises the presentation duration of each of the two cues in a cue group, the interval between the paired coupling of cue one and cue two within a cue group, and the interval between two adjacent cue groups;
a cue group refers to two cues that form a paired coupling;
the stimulation time of each of cue one and cue two is 500-3000 milliseconds;
the interval between cue one and cue two within a paired-coupled cue group is 0-2000 milliseconds;
the interval between two adjacent cue groups is more than 8000 milliseconds;
the processor (4) comprises a cue delivery timing control module (43),
the cue delivery timing control module (43) is connected with the sleep brain wave real-time phase monitor (2) and the sensory stimulator (3), and determines the delivery time of the memory-inducing cues according to the phase information of the sleep brain waves;
the cue delivery timing control module (43) controls the delivery of cue one of the memory-inducing cues to occur during the ascending phase of the sleep slow wave.
2. The system for information implantation and memory editing during sleep as claimed in claim 1,
the processor (4) comprises a sensory stimulation module (41),
the sensory stimulation module (41) is connected with the sensory stimulator (3) and is used for setting, storing, and inputting the parameters of the cue stimuli into the sensory stimulator (3);
the cue stimuli include sound, smell, light, motion, tactile information, temperature information, electricity, and magnetic fields;
when the cue stimulus is sound, the corresponding parameters include loudness, sound frequency, sound content, duration, and timbre;
when the cue stimulus is a smell, the corresponding parameters include the components of the substance generating the smell, the proportion of each component, the delivery mode of the smell, and the ambient temperature, humidity, and wind speed during delivery;
when the cue stimulus is light, the corresponding parameters include the frequency, intensity, presentation time point, presentation duration, and presentation form of the light;
when the cue stimulus is motion, the corresponding parameters include the direction, acceleration, speed, and duration of the motion;
when the cue stimulus is tactile information, the corresponding parameters include the pressure intensity, stimulation frequency, and stimulation form of the applied tactile information;
when the cue stimulus is electricity, the corresponding parameters include the voltage and current, the stimulation form, the stimulation time, the stimulation frequency, and the stimulation position;
when the cue stimulus is a magnetic field, the corresponding parameters include the strength, form, stimulation time, and stimulation location of the magnetic field.
3. A method for information implantation and memory editing during sleep, which is implemented by the system for information implantation and memory editing during sleep according to any one of claims 1-2.
4. The method for information implantation and memory editing during sleep as claimed in claim 3,
the method comprises the following steps:
Step 1, providing learning memory content through the sensory stimulator (3) while the user is awake; the learning memory content comprises two parallel combinations: A with cue one, and B with cue two;
Step 2, providing memory-inducing cues through the sensory stimulator (3) while the user is asleep; the memory-inducing cues comprise the paired coupling of cue one and cue two.
CN202211442674.8A 2022-11-18 2022-11-18 System and method for information implantation and memory editing in sleep Active CN115634356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211442674.8A CN115634356B (en) 2022-11-18 2022-11-18 System and method for information implantation and memory editing in sleep

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211442674.8A CN115634356B (en) 2022-11-18 2022-11-18 System and method for information implantation and memory editing in sleep

Publications (2)

Publication Number Publication Date
CN115634356A CN115634356A (en) 2023-01-24
CN115634356B true CN115634356B (en) 2023-03-31

Family

ID=84948525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211442674.8A Active CN115634356B (en) 2022-11-18 2022-11-18 System and method for information implantation and memory editing in sleep

Country Status (1)

Country Link
CN (1) CN115634356B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112384131A (en) * 2018-05-10 2021-02-19 皇家飞利浦有限公司 System and method for enhancing sensory stimuli delivered to a user using a neural network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8382484B2 (en) * 2011-04-04 2013-02-26 Sheepdog Sciences, Inc. Apparatus, system, and method for modulating consolidation of memory during sleep
US20140057232A1 (en) * 2011-04-04 2014-02-27 Daniel Z. Wetmore Apparatus, system, and method for modulating consolidation of memory during sleep
US10328234B2 (en) * 2013-01-29 2019-06-25 Koninklijke Philips N.V. System and method for enhanced knowledge consolidation by sleep slow wave induction and sensory context re-creation
WO2016102602A1 (en) * 2014-12-22 2016-06-30 Icm (Institut Du Cerveau Et De La Moelle Épinière) Method and device for enhancing memory consolidation
EP3305189B1 (en) * 2016-10-03 2022-09-21 Teledyne Scientific & Imaging, LLC System and method for targeted memory enhancement during sleep

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112384131A (en) * 2018-05-10 2021-02-19 皇家飞利浦有限公司 System and method for enhancing sensory stimuli delivered to a user using a neural network

Also Published As

Publication number Publication date
CN115634356A (en) 2023-01-24

Similar Documents

Publication Publication Date Title
Forbes Yoga for emotional balance: Simple practices to help relieve anxiety and depression
JP6963668B1 (en) Solution provision system
Zarren et al. Brief cognitive hypnosis: Facilitating the change of dysfunctional behavior
Sutton The air between two hands: Silence, music and communication
WO2013188557A1 (en) Producing audio output for music therapy
CN113574608A (en) Method and system for generating a treatment course
Bolstad Resolve: A new model of therapy
CN115634356B (en) System and method for information implantation and memory editing in sleep
Blagrove et al. The science and art of dreaming
Green Cognitive-Behavioral Hypnotherapy for Smoking Cessation: A Case Study in a Group Setting.
KR20020008683A (en) Combination internet based online and offline brain power developing system
Barber Responding to ‘Hypnotic’Suggestions: An Introspective Report
Budzynski The clinical guide to sound and light
Kaelen The use of music in psychedelic therapy
Suedfeld et al. Flow of consciousness in restricted environmental stimulation
Frank-Schwebel Developmental trauma and its relation to sound and music
Rechardt On musical cognition and archaic meaning schemata
CN114652938B (en) Intelligent closed-loop regulation stimulation system for promoting sleep and use method
Raduga et al. Real-time transferring of music from lucid dreams into reality by electromyography sensors.
WO2022118955A1 (en) Solution providing system
US20240307652A1 (en) Dream directing system and method for using the same
Windholz Hypnosis and inhibition as viewed by Heidenhain and Pavlov
Moffitt et al. The creation of self: Self-reflectiveness in dreaming and waking
Dale Hypnosis and Education.
Istók Cognitive and neural determinants of music appreciation and aesthetics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant