CN115886817A - Psychological state detection method and device - Google Patents
- Publication number
- CN115886817A CN115886817A CN202211432954.0A CN202211432954A CN115886817A CN 115886817 A CN115886817 A CN 115886817A CN 202211432954 A CN202211432954 A CN 202211432954A CN 115886817 A CN115886817 A CN 115886817A
- Authority
- CN
- China
- Prior art keywords
- head
- psychological
- pixel point
- resonance frequency
- frequency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The application discloses a psychological state detection method and device. The method comprises the following steps: acquiring a video stream of a human head, and obtaining multiple frames of head images from the video stream; processing the head images to locate the contours of several head-related organs and to obtain the vibration frequency of each pixel point within those contours; calculating, from the vibration frequency of each pixel point, time-series data of the head-related resonance frequencies within a preset time; and outputting multiple psychological states and their grades according to the time-series data. Because the time-series resonance-frequency data are computed from the vibration frequency of every pixel point within the contours of the head-related organs, the psychological state can be detected objectively and accurately, so the method and device offer a wide application range, convenient use and high precision.
Description
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a psychological state detection method and device.
Background
Psychology refers to the process and result by which a person organizes internal symbolic activities. Specifically, the form in which living beings subjectively reflect the objective material world is called a psychological phenomenon; it comprises psychological processes and psychological characteristics, and human psychological activity goes through stages of occurrence, development and disappearance. Human beings show different outward expressions under different psychological states (such as fatigue, anxiety and psychological stress). For example, working efficiency is higher under moderate psychological pressure, whereas it may decline when the pressure is too low or too high.
At present, the psychological state is usually determined by first detecting a single type of data and then computing the state from the detected data, but this approach is often inaccurate.
Summary of the application
The embodiment of the application aims to provide a psychological state detection method and a psychological state detection device, so as to overcome the defect that the prior art cannot accurately detect the psychological state.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, a method for detecting a psychological state is provided, which includes the following steps:
acquiring a video stream of a human head, and acquiring a multi-frame head image from the video stream;
processing the multi-frame head image, positioning outlines of a plurality of organs related to the head, and acquiring the vibration frequency of each pixel point in the outlines;
calculating time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point;
and outputting a plurality of psychological states and the grades of the psychological states according to the time sequence change data.
In a second aspect, there is provided a mental state detection apparatus comprising:
the acquisition module is used for acquiring a video stream of the head of a human body and acquiring a multi-frame head image from the video stream;
the processing module is used for processing the multi-frame head images, positioning outlines of a plurality of organs related to the head, and acquiring the vibration frequency of each pixel point in the outlines;
the calculation module is used for calculating time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point;
and the output module is used for outputting a plurality of psychological states and the grades of the psychological states according to the time sequence change data.
According to the method and the device, the time sequence change data of the resonance frequency in the preset time is calculated according to the vibration frequency of each pixel point in the outline of the organs related to the head, and then the psychological state can be objectively and accurately detected, so that the method and the device have the advantages of wide application range, convenience in use and high precision.
Drawings
Fig. 1 is a flowchart of a psychological state detection method according to an embodiment of the present application;
fig. 2 is a diagram of an implementation of a mental state detection method according to an embodiment of the present application;
fig. 3 is a specific implementation diagram of a non-contact mental state detection dedicated device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a psychological state detecting device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Existing video-based non-contact psychological state detection is mainly applied to micro-expressions and vibration images. Micro-expression detection mainly covers emotions such as anger and fear, while vibration images are derived from a mathematical formula and do not introduce deep learning to continuously improve detection accuracy. Any substance has wave-particle duality; that is, every substance in the universe, living or non-living, vibrates at every moment. Furthermore, every substance has its own inherent natural frequency, also known as its resonance frequency, and an organism may have multiple resonance frequencies. The organism itself is a fine and weak natural electromagnetic-wave vibration system, and organs and tissues such as the brain and heart have specific vibration frequencies. By detecting the weak resonance frequencies of several head organs, applying deep learning, and establishing a multi-resonance-frequency psychological analysis model, the psychological state can be analyzed more accurately. Previously, however, obtaining the psychological state by detecting the resonance-frequency signals of organs required expensive, dedicated medical myoelectric devices.
The embodiment of the application provides a non-contact psychological state detection method and device based on multiple head resonance frequencies, belonging to the technical field of artificial-intelligence psychological state detection. It detects the resonance frequencies of head organs with a video-based method and thereby realizes psychological state detection in a non-contact manner. The detection principle is as follows: every substance has its own natural frequency, also known as its resonance frequency, and the organism itself is a fine and weak natural electromagnetic-wave vibration system. The detection process is: a camera captures a video of the human head; the video signal is processed frame by frame; the contours of the pupils, eyes, nose and head are located in each frame image; and the pupil resonance frequency, eyeball resonance frequency, respiratory resonance frequency and the head resonance frequency caused by vestibular feedback are calculated from the displacement of the pixel points within those contours across consecutive frame images. By associating and analyzing the time-series data of these resonance frequencies over a period of time with a deep-learning engine, multiple psychological states of the human body can be obtained, such as fatigue, attention, stress, anxiety, depression, interpersonal relationship and tension.
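The frame-to-state flow just described can be sketched as a minimal pipeline. Everything below is a stub that only mirrors the dataflow from the text (frames to organ contours, to resonance-frequency series, to graded states); real landmark detection, frequency estimation and the trained psychology model are assumed to exist elsewhere.

```python
# Skeleton of the detection flow: frames -> organ contours -> resonance-
# frequency series -> psychological states. All three stages are placeholder
# stubs; only the overall structure follows the description above.
ORGANS = ("pupil", "eyeball", "respiration", "head")

def locate_contours(frame):
    # Stub: a real system would run facial-landmark detection here.
    return {organ: frame for organ in ORGANS}

def resonance_series(per_frame_contours, organ):
    # Stub: weighted average of per-pixel vibration frequencies over time.
    return [1.0] * len(per_frame_contours)

def analyse(series_by_organ):
    # Stub: the trained multi-resonance-frequency psychology model.
    return {"fatigue": 0, "stress": 0}

def detect_states(frames):
    contours = [locate_contours(f) for f in frames]
    series = {organ: resonance_series(contours, organ) for organ in ORGANS}
    return analyse(series)

print(detect_states([None] * 30))  # {'fatigue': 0, 'stress': 0}
```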
In addition, the psychological state detection device consists of an information acquisition unit, a resonance frequency calculation unit and a psychological state detection unit. The information acquisition unit captures a video of the human head with a 30 frames-per-second camera, obtaining 30 head images of no less than 400 × 400 pixels every second. The resonance frequency calculation unit identifies the contours of the pupil, eyes, nose and head and calculates the resonance frequencies from the displacement of the pixel points within those contours across consecutive frames. The psychological state detection unit detects the psychological state based on the deep-learning results.
Specifically, the information acquisition unit comprises a video acquisition module and a video processing module, the video acquisition module realizes acquisition of video streams in the camera, and the video processing module obtains frame images. The resonance frequency calculation unit comprises a pupil, eyes, a nose and head contour positioning module, a contour inner pixel point vibration frequency calculation module, a multi-resonance frequency calculation module and a time sequence change data acquisition module. The psychological state detection unit comprises a psychological state detection module and a deep learning engine module, and the deep learning engine module trains a large amount of data under artificial adjustment such as psychological state feedback and supervised learning to obtain a continuously optimized resonance frequency psychological analysis model. And the psychological state detection module is used for carrying out accurate psychological state detection by using the optimized resonance frequency psychological analysis model.
A psychological state detection method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, a flowchart of a method for detecting a psychological state according to an embodiment of the present application is provided, where the method includes the following steps:
Step 101: acquiring a video stream of a human head, and obtaining multiple frames of head images from the video stream.
Step 102: processing the multiple frames of head images, locating the contours of several head-related organs, and obtaining the vibration frequency of each pixel point within the contours.
Specifically, the contours of the pupil, eyes, nose and head, and the pixel points within those contours, can be located on each frame of head image according to human biological characteristics; the vibration frequency of each pixel point is then calculated from the displacement of that pixel point across consecutive head images.
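A per-pixel vibration frequency can be estimated from that pixel's displacement trace across consecutive frames, for example as the dominant FFT bin. This is a sketch under the assumption of a simple spectral-peak estimate; the text does not specify the exact algorithm.

```python
import numpy as np

FPS = 30  # camera frame rate stated in the description

def pixel_vibration_frequency(displacements, fps=FPS):
    """Estimate one pixel's vibration frequency (Hz) as the dominant
    FFT bin of its displacement trace across consecutive frames."""
    d = np.asarray(displacements, dtype=float)
    d = d - d.mean()                          # drop the static offset
    spectrum = np.abs(np.fft.rfft(d))
    freqs = np.fft.rfftfreq(d.size, d=1.0 / fps)
    return float(freqs[spectrum[1:].argmax() + 1])  # skip the DC bin

# Synthetic pixel oscillating at 2 Hz, observed for 3 seconds at 30 fps:
t = np.arange(3 * FPS) / FPS
trace = 0.5 * np.sin(2 * np.pi * 2.0 * t)
print(pixel_vibration_frequency(trace))  # ~2.0 Hz
```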
Step 103: calculating time-series data of the head-related resonance frequencies within a preset time according to the vibration frequency of each pixel point.
Specifically, a weighted average may be computed over the vibration frequencies of the relevant pixel points to obtain time-series data of the pupil resonance frequency, eyeball resonance frequency, respiratory resonance frequency and head resonance frequency within the preset time.
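The weighted-average step can be sketched as follows. The weighting scheme is an assumption: the text only says "weighted average" without specifying the weights, so uniform weights are used by default here.

```python
import numpy as np

def organ_resonance(pixel_freqs, weights=None):
    """Weighted average of the per-pixel vibration frequencies inside one
    organ contour. Uniform weights are an assumption; the text does not
    specify the weighting scheme."""
    f = np.asarray(pixel_freqs, dtype=float)
    w = np.ones_like(f) if weights is None else np.asarray(weights, dtype=float)
    return float((f * w).sum() / w.sum())

# One sample per organ per time step builds the time-series data:
print(organ_resonance([1.9, 2.1, 2.0]))                 # ~2.0
print(organ_resonance([1.0, 3.0], weights=[3.0, 1.0]))  # 1.5
```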
Step 104: outputting multiple psychological states and the grades of the psychological states according to the time-series data.
Specifically, the time-series variation data may be input into a preset multi-resonance-frequency psychological analysis model, which outputs multiple psychological states and their grades.
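The model interface can be sketched as below. The real model is a trained deep-learning network; the variability heuristic here is purely illustrative and does not come from the text.

```python
import numpy as np

STATES = ("fatigue", "attention", "stress", "anxiety",
          "depression", "interpersonal relationship", "tension")

def analyse(series_by_organ):
    """Stand-in for the preset multi-resonance-frequency psychological
    analysis model: maps each state to a grade from 0 (normal) to 3.
    The variability-based heuristic is an illustrative assumption."""
    variability = float(np.mean([np.std(v) for v in series_by_organ.values()]))
    grade = min(3, int(variability))
    return {state: grade for state in STATES}

steady = {"pupil": [2.0] * 60, "head": [1.0] * 60}
print(analyse(steady)["fatigue"])  # 0 (stable frequencies -> normal)
```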
In this embodiment, deep learning engine association and analysis may be performed to optimize the multi-resonant frequency psychology analysis model.
According to the method and the device, the time sequence change data of the resonance frequency in the preset time is calculated according to the vibration frequency of each pixel point in the outline of the organs related to the head, and then the psychological state can be objectively and accurately detected, so that the method and the device have the advantages of wide application range, convenience in use and high precision.
In the embodiment of the present application, as shown in fig. 2, the specific implementation process includes the following steps:
(1) First, a camera acquires a video stream of the face. The camera must face the front of the face, and the face must occupy no less than 400 × 400 pixels in the camera image. The frame rate of the camera must reach 30 fps, so that 30 high-quality face images can be acquired from the video stream every second.
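The acquisition constraints of step (1) can be expressed as a simple validity check; the helper name and interface below are illustrative, not part of the text.

```python
def acquisition_ok(fps, face_height_px, face_width_px):
    """Check the constraints from step (1): a camera frame rate of at
    least 30 fps and a face region of at least 400 x 400 pixels."""
    return fps >= 30 and face_height_px >= 400 and face_width_px >= 400

print(acquisition_ok(30, 480, 480))  # True
print(acquisition_ok(25, 480, 480))  # False: frame rate too low
```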
(2) Each frame image is processed: the contours of the pupils, eyes, nose and head are located on the frame image according to human biological characteristics, and the pixel points within the contours are located line by line.
(3) The vibration frequency of each pixel point is calculated from the displacement of the pixel points within the contours of the pupils, eyes, nose and head across consecutive frame images. A given pixel point may show no measurable displacement between adjacent frames; its displacement change can only be computed after accumulating over multiple frames.
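The accumulation point can be illustrated numerically: for a slow oscillation, the frame-to-frame displacement is tiny, while the displacement accumulated over many frames is clearly measurable. The signal parameters below are synthetic examples.

```python
import numpy as np

FPS = 30
t = np.arange(2 * FPS) / FPS                 # two seconds of frames
pos = 0.2 * np.sin(2 * np.pi * 0.5 * t)      # slow 0.5 Hz sway, 0.2 px peak

adjacent = np.abs(np.diff(pos)).max()                  # between adjacent frames
over_half_second = np.abs(pos[15:] - pos[:-15]).max()  # over 15 frames

print(float(adjacent))          # ~0.02 px: easily lost in noise
print(float(over_half_second))  # ~0.28 px: clearly measurable
```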
(4) The pupil resonance frequency, eyeball resonance frequency, respiratory resonance frequency, the head resonance frequency caused by vestibular feedback, and so on are calculated as accurately as possible from the weighted average of the vibration frequencies of the related pixel points.
(5) Like heart rate, the resonance frequency is a physiological parameter of the human body, and just as heart-rate variability can be represented by an electrocardiogram, the change of resonance frequency over time forms a resonance-frequency variation graph. The time-series data of the multiple head resonance frequencies above are continuously acquired and calculated for 60 seconds.
(6) The time-series data are input into the preset multi-resonance-frequency psychological analysis model, which outputs grades for multiple psychological states. The psychological states include fatigue, attention, stress, anxiety, depression, interpersonal relationship, tension and the like, where fatigue means mental fatigue and attention means attention disorder. Each psychological state is divided into four grades: normal, grade 1, grade 2 and grade 3; the higher the grade, the more problematic the psychological state is likely to be.
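The four-grade output of step (6) can be sketched as a threshold mapping. The score thresholds below are illustrative assumptions; the text does not give the mapping used by the trained model.

```python
def grade(score):
    """Map a model score in [0, 1] to the four grades used in step (6):
    0 = normal, then grades 1-3, higher meaning more problematic.
    The thresholds are illustrative assumptions, not values from the text."""
    for level, threshold in ((3, 0.75), (2, 0.5), (1, 0.25)):
        if score >= threshold:
            return level
    return 0  # normal

print([grade(s) for s in (0.1, 0.3, 0.6, 0.9)])  # [0, 1, 2, 3]
```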
(7) Deep-learning engine association and analysis are performed to optimize the resonance-frequency psychological analysis model, and the optimized model replaces the original preset model, so that the accuracy of psychological state detection keeps increasing.
Through the above technical scheme, the system can grade the following psychological abnormalities: fatigue, attention, stress, anxiety, depression, interpersonal relationship, tension and the like, which are among the psychological indicators of greatest concern for groups such as soldiers and students. Because the system detects the psychological state from a video stream, it is non-contact and can therefore capture the objective state of the test subject.
The embodiment of the application may further provide the computer device as a dedicated non-contact psychological state detection device. As shown in fig. 3, the device operates independently and has a built-in camera with a frame rate of 30 fps and a resolution of 720P. After the device is powered on, the psychological state detection application starts automatically and processes the head video stream collected by the built-in camera to complete the detection. When the application exits, the device shuts down automatically. The device has a main screen and an auxiliary screen that display different interfaces for the test subject and the operator respectively, so the detected psychological state remains unknown to the test subject.
In addition, to keep detection stable, the system is a stand-alone offline edition with an integrated design and a built-in camera. The system adopts a double-sided-screen design in which the front and rear screens display different content; the screen facing the test subject does not display detection data or results, avoiding any negative effect on the subject. Compared with a psychological scale, the system can test the same subject multiple times. Compared with wearable devices such as electroencephalograph and electrocardiograph equipment, the system is more convenient and flexible to use.
The embodiment of the application adopts a non-contact, imperceptible psychological state detection method: by calculating the inherent vibration frequencies of human organs, it realizes video-based non-contact psychological state detection. Because this method cannot be disguised, the detection is more objective and accurate, with a wide application range, convenient use and high precision.
As shown in fig. 4, a mental state detection apparatus in an embodiment of the present application includes:
the acquisition module 410 is configured to acquire a video stream of a human head, and acquire a multi-frame head image from the video stream.
The processing module 420 is configured to process the multiple frames of head images, locate contours of multiple organs related to the head, and obtain the vibration frequency of each pixel point in the contours.
Specifically, the processing module 420 is specifically configured to locate contours of a pupil, an eye, a nose, and a head, and each pixel point in the contours, on each frame of head image based on human biological features; and calculating the vibration frequency of each pixel point according to the displacement of each pixel point in the continuous head images.
The calculating module 430 is configured to calculate time sequence change data of the head-related resonance frequency within a preset time according to the vibration frequency of each pixel.
Specifically, the calculating module 430 is specifically configured to perform weighted average calculation on the vibration frequency of each pixel point, so as to obtain time sequence change data of the pupil resonance frequency, the eyeball resonance frequency, the respiratory resonance frequency, and the head resonance frequency within a preset time.
The output module 440 is configured to output multiple psychological states and levels of the psychological states according to the time sequence change data.
Specifically, the output module 440 is specifically configured to input the time-series change data into a preset multi-resonance frequency psychological analysis model, and output a plurality of psychological states and levels of the psychological states.
In addition, the psychological state detecting device further includes:
and the optimization module is used for performing deep learning engine association and analysis and optimizing the multi-resonance-frequency psychological analysis model.
According to the method and the device, the time sequence change data of the resonance frequency in the preset time is calculated according to the vibration frequency of each pixel point in the outline of the organs related to the head, and then the psychological state can be objectively and accurately detected, so that the method and the device have the advantages of wide application range, convenience in use and high precision.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A psychological state detection method is characterized by comprising the following steps:
acquiring a video stream of a human head, and acquiring a multi-frame head image from the video stream;
processing the multi-frame head image, positioning outlines of a plurality of organs related to the head, and acquiring the vibration frequency of each pixel point in the outlines;
calculating time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point;
and outputting a plurality of psychological states and the grades of the psychological states according to the time sequence change data.
2. The method according to claim 1, wherein said processing the plurality of frames of head images, locating contours of a plurality of organs associated with the head, and obtaining a vibration frequency of each pixel point within the contours comprises:
based on the human biological characteristics, positioning the outlines of pupils, eyes, a nose and the head on each frame of head image, and all pixel points in the outlines;
and calculating the vibration frequency of each pixel point according to the displacement of each pixel point in the continuous head images.
3. The method according to claim 2, wherein the calculating of the time-series variation data of the head-related resonance frequency within a preset time according to the vibration frequency of each pixel point specifically comprises:
and respectively carrying out weighted average calculation on the vibration frequency of each pixel point to obtain time sequence change data of the pupil resonance frequency, the eyeball resonance frequency, the respiration resonance frequency and the head resonance frequency in a preset time.
4. The method according to claim 1, wherein outputting a plurality of mental states and levels of the mental states according to the time-series change data comprises:
and inputting the time sequence change data into a preset multi-resonance frequency psychological analysis model, and outputting a plurality of psychological states and grades of the psychological states.
5. The method of claim 4, further comprising:
and performing deep learning engine association and analysis, and optimizing the multi-resonance-frequency psychological analysis model.
6. A mental state detection apparatus, comprising:
the acquisition module is used for acquiring a video stream of the head of a human body and acquiring a multi-frame head image from the video stream;
the processing module is used for processing the multi-frame head images, positioning outlines of a plurality of organs related to the head, and acquiring the vibration frequency of each pixel point in the outlines;
the calculation module is used for calculating time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point;
and the output module is used for outputting a plurality of psychological states and the grades of the psychological states according to the time sequence change data.
7. The apparatus of claim 6,
the processing module is specifically used for positioning the outlines of the pupil, the eyes, the nose and the head and all pixel points in the outlines on each frame of head image based on the human biological characteristics; and calculating the vibration frequency of each pixel point according to the displacement of each pixel point in the continuous head images.
8. The apparatus of claim 7,
the calculation module is specifically configured to perform weighted average calculation on the vibration frequency of each pixel point respectively to obtain time sequence change data of the pupil resonance frequency, the eyeball resonance frequency, the respiration resonance frequency and the head resonance frequency within a preset time.
9. The apparatus of claim 6,
the output module is specifically configured to input the time-series change data into a preset multi-resonance-frequency psychological analysis model, and output multiple psychological states and levels of the multiple psychological states.
10. The apparatus of claim 9, further comprising:
and the optimization module is used for performing deep learning engine association and analysis and optimizing the multi-resonance-frequency psychological analysis model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211432954.0A CN115886817A (en) | 2022-11-16 | 2022-11-16 | Psychological state detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115886817A true CN115886817A (en) | 2023-04-04 |
Family
ID=86486432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211432954.0A Pending CN115886817A (en) | 2022-11-16 | 2022-11-16 | Psychological state detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115886817A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2009140207A (en) * | 2009-10-26 | 2011-05-10 | Многопрофильное предприятие ООО "Элсис" (RU) | METHOD OF OBTAINING INFORMATION ABOUT PSYCHOPHYSIOLOGICAL CONDITION OF A LIVING OBJECT |
KR20160100654A (en) * | 2015-02-16 | 2016-08-24 | 주식회사 바이브라시스템 | method and apparatus for detecting drowsiness by physiological signal by using video |
CN106618608A (en) * | 2016-09-29 | 2017-05-10 | 金湘范 | Device and method for monitoring dangerous people based on video psychophysiological parameters |
CN109902574A (en) * | 2019-01-24 | 2019-06-18 | 北京元和新通科技有限公司 | The high-risk personnel detection device and method of human body presentation variation measurement human body mood |
CN112957042A (en) * | 2021-01-29 | 2021-06-15 | 特路(北京)科技有限公司 | Non-contact target emotion recognition method and system |
CN113491519A (en) * | 2020-04-02 | 2021-10-12 | 哈曼国际工业有限公司 | Digital assistant based on emotion-cognitive load |
- 2022-11-16: Application CN202211432954.0A filed in China; published as CN115886817A; status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10206615B2 (en) | Content evaluation system and content evaluation method using the system | |
EP2698112B1 (en) | Real-time stress determination of an individual | |
CN114209324B (en) | Psychological assessment data acquisition method based on image visual cognition and VR system | |
CN109199410B (en) | Speech cognition assessment method based on eye movement | |
CN111920420B (en) | Patient behavior multi-modal analysis and prediction system based on statistical learning | |
CN110619301A (en) | Emotion automatic identification method based on bimodal signals | |
CN108478224A (en) | Intense strain detecting system and detection method based on virtual reality Yu brain electricity | |
CN111887867A (en) | Method and system for analyzing character formation based on expression recognition and psychological test | |
CN109508755B (en) | Psychological assessment method based on image cognition | |
US20180279935A1 (en) | Method and system for detecting frequency domain cardiac information by using pupillary response | |
CN109009052A (en) | The embedded heart rate measurement system and its measurement method of view-based access control model | |
CN110338759B (en) | Facial pain expression data acquisition method | |
CN113647950A (en) | Psychological emotion detection method and system | |
CN114565957A (en) | Consciousness assessment method and system based on micro expression recognition | |
US10631727B2 (en) | Method and system for detecting time domain cardiac parameters by using pupillary response | |
CN115089190B (en) | Pilot multi-mode physiological signal synchronous acquisition system based on simulator | |
CN106096544B (en) | Non-contact blink and heart rate joint detection system and method based on second-order blind identification | |
CN115101191A (en) | Parkinson disease diagnosis system | |
WO2023012818A1 (en) | A non-invasive multimodal screening and assessment system for human health monitoring and a method thereof | |
CN111222464A (en) | Emotion analysis method and system | |
CN115089179A (en) | Psychological emotion insights analysis method and system | |
CN115886817A (en) | Psychological state detection method and device | |
CN111938671A (en) | Anxiety trait quantification method based on multi-dimensional internal perception features | |
CN115439920B (en) | Consciousness state detection system and equipment based on emotional audio-visual stimulation and facial expression | |
CN110693508A (en) | Multi-channel cooperative psychophysiological active sensing method and service robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||