CN113143274B - Camera-based emotional early warning method - Google Patents
- Publication number: CN113143274B (application CN202110352232.3A)
- Authority: CN (China)
- Legal status: Active
Classifications

- A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
- A61B5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/746: Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
- Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Description
Technical field
The present invention relates to the field of emotion monitoring and early warning, and in particular to a camera-based emotion early warning method.
Background art
In daily work and life, a person's emotional state has a clear impact on work performance, especially for work that continues over long periods or demands intense concentration. For example, how well students listen in class is in fact affected by their own emotional state; if teachers could provide appropriate guidance based on students' emotional states, teaching effectiveness would certainly improve.

Some particularly important posts, such as staff in the control room of a nuclear power plant, require intense concentration and real-time judgment of the plant's operating status. If the staff's emotional state could be known in real time, and a shift change prompted as soon as that state becomes unstable or clearly fatigued, the plant's safety would be further enhanced and hazards arising from human negligence eliminated at the root.

There are also special positions, such as bus drivers, whose emotions, if they reach an extreme state, pose a potential threat to public safety. If the driver's emotional state could be known in time, with reminders or alarms issued when necessary, public safety would be further enhanced.

However, there is at present no technical solution that can effectively measure the emotions of people while they work normally. Although facial expressions can be judged through face recognition, facial expressions do not fully reflect the body's emotional state, so research in this area still needs further improvement.

In addition, the data measured by existing technology is one-dimensional, namely the degree of tension, or what some studies call the degree of "frustration"; in psychology these are collectively called emotional arousal (the dimension running from calm to agitated). Emotion, however, has more than this one dimension: emotional valence (the positive-negative quality of an emotion) is also a very important discriminant index, and without a positive/negative judgment, emotion discrimination is incomplete. The same highly aroused state, for example, has two extremes, rage (negative) and ecstasy (positive), and the same very calm state likewise covers two very different states, despair (negative) and serenity (positive).

For the above reasons, the inventors studied existing emotion recognition and judgment methods in depth, with a view to designing a camera-based emotion early warning method that solves the above problems.
Summary of the invention
To overcome the above problems, the inventors conducted intensive research and designed a camera-based emotion early warning method. In this method, photos of the human body are obtained in real time through a camera; a face detection algorithm finds one or more faces present in the image; relative geometric relationships are used to locate a hair-free facial region; the change in image brightness of that region is measured to obtain a continuous array, from which the monitored person's heart beat intervals are derived. Combined with the monitored person's facial expression, an emotion judgment model then gives the person's emotional state, from which the need for an alarm is judged, thereby completing the present invention.

Specifically, the object of the present invention is to provide the following: a camera-based emotion early warning method comprising the following steps.
Step 1: obtain image photos containing the face of the monitored person through real-time camera capture;

Step 2: distinguish the identity of each monitored person in the image photos through face recognition, and set up a corresponding independent storage unit for each monitored person;

Step 3: obtain the monitored person's heart beat intervals and facial expressions by reading the image photos, and store both in the independent storage unit;

Step 4: input the heart beat intervals and facial expressions into the emotion judgment model in real time to judge the monitored person's emotional state;

Step 5: issue an alarm when the monitored person's emotion is in a warning state.
Wherein, step 3 comprises the following sub-steps:

Sub-step a: screen the image photos obtained in step 1 and delete those that cannot be used to read heart beat intervals;

Sub-step b: in the remaining image photos, use relative geometric relationships to locate the hair-free part of the face as the detection region;

Sub-step c: measure the change in image brightness of this region across consecutive image photos to obtain a continuous array.

Preferably, this array is a curve describing heartbeat activity: higher average brightness corresponds to diastole (the trough of the cardiac wave), lower average brightness to systole (the peak); the time between two peaks is the heart beat interval.
Wherein, the emotion judgment model is obtained through the following sub-steps:

Sub-step 1: collect physiological data (including heart beat intervals) and facial expressions through collection devices, and convert the physiological data into a sympathetic activity index and a parasympathetic activity index;

Sub-step 2: set an emotional arousal label and an emotional valence label, record the specific arousal level in the arousal label and the specific valence in the valence label, and combine the comprehensive neural activity index data, facial expression data and emotion labels into basic data;

Sub-step 3: adjust the format of the basic data to obtain basic data in a unified format, and judge whether this unified-format basic data meets the requirements;

Sub-step 4: select usable data from the unified-format basic data that meets the requirements;

Sub-step 5: obtain the emotion judgment model from the usable data of sub-step 4.
Wherein, each comprehensive neural activity index includes one or more of the following: the sympathetic activity index, the parasympathetic activity index, the quotient of the sympathetic and parasympathetic activity indices, the sum of the two indices, and the difference between the two indices.
Wherein, judging in sub-step 3 whether the unified-format basic data meets the requirements comprises the following sub-sub-steps:

Sub-sub-step 1: randomly divide all unified-format basic data into a learning group and a test group according to a predetermined ratio;

Sub-sub-step 2: train the model with the data in the learning group, then verify the model with each item of the test group one by one, and record the verification result of each item;

Sub-sub-step 3: repeat sub-sub-steps 1 and 2, where data already assigned to the test group is not assigned to it again, so that every item of unified-format basic data is eventually used in the test group to verify a model trained on the learning-group data, until verification results are obtained for all unified-format basic data;

Sub-sub-step 4: calculate the total pass rate over the verification results of all unified-format basic data; when the total pass rate is greater than 85%, the unified-format basic data meets the requirements, otherwise delete the unified-format basic data and repeat sub-steps 1 and 2.
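The validation loop of sub-sub-steps 1-4 can be sketched as follows. This is illustrative only: `train_fn` and `test_fn` are hypothetical stand-ins for the patent's model training and per-item verification, and the samples are assumed to be hashable.

```python
import random

def validate_dataset(samples, train_fn, test_fn, test_fraction=0.2):
    """Repeated hold-out validation: every sample lands in the test group
    exactly once, and the total pass rate over all samples decides
    whether the dataset meets the requirements."""
    untested = list(samples)              # samples never yet used for testing
    results = {}                          # sample -> verification result (bool)
    while untested:
        random.shuffle(untested)
        k = max(1, int(len(samples) * test_fraction))
        test_group = untested[:k]
        train_group = [s for s in samples if s not in test_group]
        model = train_fn(train_group)     # train on the learning group
        for s in test_group:              # verify each test item one by one
            results[s] = test_fn(model, s)
        untested = untested[k:]
    total_pass_rate = sum(results.values()) / len(results)
    return total_pass_rate, total_pass_rate > 0.85   # patent threshold: > 85%
```

With a 0.2 test fraction this needs five rounds to test every sample once, matching the requirement that previously tested items are not reassigned to the test group.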
Wherein, obtaining usable data in sub-step 4 comprises the following sub-sub-steps:

Sub-sub-step a: repeat sub-sub-steps 1-3 several times, each repetition of sub-sub-step 1 producing a test group composed of different unified-format basic data, so that each item of unified-format basic data ends up with multiple verification results; then calculate the average pass rate of each item;

Sub-sub-step b: find and hide the one item of unified-format basic data with the lowest average pass rate, perform sub-sub-steps 1-4 again with the remaining data, and observe whether the total pass rate is higher than before that item was hidden. If the total pass rate improves, delete the hidden item and proceed to sub-sub-step c; if not, restore the hidden item, then select and hide the item with the second-lowest average pass rate, repeating this process until the total pass rate improves;

Sub-sub-step c: after the total pass rate improves, repeat sub-sub-steps a and b on the remaining unified-format basic data, and continue doing so each time the total pass rate improves, until the total pass rate exceeds 90% or the deleted items reach 30% of all unified-format basic data; the unified-format basic data remaining at that point is the usable data.
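The greedy pruning of sub-sub-steps a-c can be sketched as below. The two scoring functions are hypothetical stand-ins for the repeated validation runs: `total_pass_rate` scores a whole candidate dataset, `avg_pass_rate` scores one item.

```python
def prune_dataset(samples, total_pass_rate, avg_pass_rate):
    """Hide the worst-scoring item, keep the deletion only if the total
    pass rate improves; stop at a 90% total pass rate or after 30% of
    the data has been deleted."""
    data = list(samples)
    limit = int(len(samples) * 0.30)      # may delete at most 30% of the data
    removed = 0
    best = total_pass_rate(data)
    while best < 0.90 and removed < limit:
        improved = False
        # try hiding candidates from the lowest average pass rate upwards
        for cand in sorted(data, key=avg_pass_rate):
            trial = [s for s in data if s is not cand]
            rate = total_pass_rate(trial)
            if rate > best:               # deletion helped: make it permanent
                data, best = trial, rate
                removed += 1
                improved = True
                break
        if not improved:                  # no single deletion helps any more
            break
    return data, best
```

The remaining `data` corresponds to the "usable data" of sub-step 4.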
Wherein, in sub-step 5, the emotion judgment model comprises an emotional arousal prediction model and an emotional valence prediction model.

In obtaining the emotion judgment model, the comprehensive neural activity index data, facial expression data and emotional arousal data of each item of usable data are spliced into one data segment, which serves as learning material for obtaining the emotional arousal prediction model through machine learning;

likewise, the comprehensive neural activity index data, facial expression data and emotional valence data of each item of usable data are spliced into one data segment, which serves as learning material for obtaining the emotional valence prediction model through machine learning.
Wherein, the emotional state in step 4 includes the arousal level and the emotional valence.
Wherein, in step 5, the monitored person's emotional state is compared with the average arousal level and the average valence of the calm state.

When the monitored person is engaged in important, difficult or high-intensity work, an alarm is issued when:

the arousal level is more than 1.5 standard deviations above the calm-state average arousal level,

or the arousal level is more than 1 standard deviation below the calm-state average arousal level,

or the valence is more than 1.5 standard deviations below the calm-state average valence.

When the monitored person is engaged in ordinary work, an alarm is issued when:

the arousal level is more than 1.5 standard deviations above the calm-state average arousal level,

or the arousal level is more than 1 standard deviation below the calm-state average arousal level,

or the valence is more than 1.5 standard deviations below the calm-state average valence.

When the monitored person's arousal level is more than 2 standard deviations above the calm-state average arousal level, and the valence is more than 2 standard deviations below the calm-state average valence, the monitored person poses a potential threat to public safety, and an alarm is issued.
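The threshold rules above can be sketched as z-score comparisons against the person's calm-state baseline. A hedged illustration only: the text as given states the same individual thresholds for important and ordinary work, and the dictionary keys are naming assumptions.

```python
def should_alarm(arousal, valence, baseline):
    """Step 5 alarm rules as z-scores against the calm-state baseline.

    baseline -- dict with arousal_mean, arousal_sd, valence_mean, valence_sd
    """
    za = (arousal - baseline["arousal_mean"]) / baseline["arousal_sd"]
    zv = (valence - baseline["valence_mean"]) / baseline["valence_sd"]
    too_agitated = za > 1.5                  # > 1.5 SD above calm-state arousal
    too_flat = za < -1.0                     # > 1 SD below calm-state arousal
    too_negative = zv < -1.5                 # > 1.5 SD below calm-state valence
    public_threat = za > 2.0 and zv < -2.0   # very agitated AND very negative
    return too_agitated or too_flat or too_negative or public_threat
```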
The beneficial effects of the present invention include:

(1) The camera-based emotion early warning method provided by the present invention can set different alarm conditions according to the nature of the monitored person's work, thereby widening the method's range of application;

(2) The method is provided with an emotion judgment model trained on a large number of samples, and can accurately and promptly derive the monitored person's emotional state from image information;

(3) The method adopts a two-dimensional emotion evaluation scheme that measures not only emotional arousal but also estimates emotional valence. Compared with earlier 2-class or 4-class emotion rating techniques, this technique can output 100 emotion ratings of different intensity and character; its results are more realistic, closer to common sense and easier to understand, and therefore more usable in real production and life;

(4) The method can judge in real time whether the monitored person is fit to take part in important, difficult or high-intensity work, and whether the monitored person is undergoing intense psychological activity and poses a threat to public safety.
Description of the drawings

Figure 1 shows the overall logic diagram of a camera-based emotion early warning method according to a preferred embodiment of the present invention;

Figure 2 shows the changes over one day in the arousal level of one monitored person, obtained through the camera-based emotion early warning method in an embodiment of the present invention;

Figure 3 shows the changes over one day in the emotional valence of one monitored person, obtained through the camera-based emotion early warning method in an embodiment of the present invention;

Figure 4 shows the display interface for a monitored person's own rating of his or her emotional state over one day in an embodiment of the present invention.
Detailed description

The present invention is further described in detail below with reference to the drawings and embodiments; through these descriptions its features and advantages will become clearer.

The word "exemplary" here means "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as superior to other embodiments. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The camera-based emotion early warning method provided by the present invention is shown in Figure 1 and comprises the following steps:

Step 1: obtain image photos containing the face of the monitored person through real-time camera capture;

Step 2: distinguish the identity of each monitored person in the image photos, and set up a corresponding independent storage unit for each monitored person;

Step 3: obtain the monitored person's heart beat intervals and facial expressions by reading the image photos, and store both in the independent storage unit;

Step 4: input the heart beat intervals and facial expressions into the emotion judgment model in real time to judge the monitored person's emotional state;

Step 5: issue an alarm when the monitored person's emotion is in a warning state.

In a preferred embodiment, in step 1 the camera may be placed near the monitored person's working position: near the display screen if the monitored person is an operator in a nuclear power plant control room, near the blackboard if the monitored person is a student, or near the front window if the monitored person is a bus driver. That is, the camera is preferably placed facing the monitored person so as to capture as much of the face as possible; its focal length is adjustable and it can photograph several monitored persons at once.

In a preferred embodiment, in step 2 the identity of each monitored person in the image photos is distinguished by face recognition, with a corresponding independent storage unit set up for each monitored person; the open-source openface face recognition tool may be used for this.
In a preferred embodiment, step 3 comprises the following sub-steps.

Sub-step a: screen the image photos obtained in step 1 and delete those that cannot be used to read heart beat intervals.

Specifically, the images to be deleted include: (1) images in which the mean facial brightness changes by more than 1 (on a 0-255 brightness scale) within a 1000-millisecond time window; (2) images in which no face is captured by openface; (3) images in which the face contour shifts within 1000 milliseconds by more than 1% of the vertical resolution of the picture (e.g. at a vertical resolution of 1080 pixels, frames in that period are in principle deleted once the contour shifts by more than 10 pixels).
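The three deletion rules can be sketched as a per-frame screen. A sketch under stated assumptions: the per-frame input arrays (mean brightness, face-found flags, contour position) are assumed to have been extracted already, e.g. by openface.

```python
import numpy as np

def usable_frames(brightness, face_found, contour_y, fps=25, v_res=1080):
    """Return a boolean mask of frames usable for heart-beat reading.

    brightness  -- mean facial brightness per frame (0-255 scale)
    face_found  -- whether a face was captured in each frame
    contour_y   -- vertical face-contour position per frame, in pixels
    """
    n = len(brightness)
    win = fps                                    # 1000 ms window at fps frames/s
    ok = np.array(face_found, dtype=bool)        # rule 2: face must be captured
    b = np.asarray(brightness, float)
    c = np.asarray(contour_y, float)
    for i in range(n):
        lo, hi = max(0, i - win), i + 1
        # rule 1: mean-brightness change > 1 within the 1 s window
        if b[lo:hi].max() - b[lo:hi].min() > 1.0:
            ok[i] = False
        # rule 3: contour displacement > 1% of vertical resolution within 1 s
        if c[lo:hi].max() - c[lo:hi].min() > 0.01 * v_res:
            ok[i] = False
    return ok
```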
Sub-step b: in the remaining image photos, use relative geometric relationships to locate the hair-free part of the face as the detection region, i.e. the forehead and cheeks.

Sub-step c: measure the change in image brightness of this region across consecutive image photos to obtain a continuous array.

Preferably, this array is a curve describing heartbeat activity: higher average brightness corresponds to diastole (the trough of the cardiac wave), lower average brightness to systole (the peak); the time between two peaks is the heart beat interval.

The raw signal is filtered with a Butterworth filter, retaining the 0.5-2 Hz band, and peaks and troughs are then sought in the filtered signal: a peak is a point whose value exceeds those on either side within a 500-millisecond signal window, and a trough is a point whose value is below those on either side within a 500-millisecond window.
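The filtering and peak-picking step can be sketched as follows. This is an illustration, not the patent's implementation: an FFT band-pass stands in for the named Butterworth filter, while the 0.5-2 Hz band and the 500 ms peak window follow the text.

```python
import numpy as np

def beat_intervals(brightness, fs=25.0):
    """Derive heart beat intervals (seconds) from the detection-region
    brightness trace sampled at fs frames per second."""
    x = np.asarray(brightness, float) - np.mean(brightness)
    # band-pass: zero out everything outside 0.5-2 Hz in the spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec = np.fft.rfft(x)
    spec[(freqs < 0.5) | (freqs > 2.0)] = 0.0
    y = np.fft.irfft(spec, n=len(x))
    # a peak is the maximum of its 500 ms window (250 ms on each side)
    half = int(0.25 * fs)
    peaks = [i for i in range(half, len(y) - half)
             if y[i] == y[i - half:i + half + 1].max()]
    # time between consecutive peaks = heart beat interval
    return np.diff(peaks) / fs
```

At 25 Hz sampling, a 7-second capture (the minimum stated below) yields enough peaks for a handful of intervals.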
In this application, producing one set of data preferably requires continuous capture of at least 7 seconds of frames at a sampling frequency of no less than 25 Hz.

Preferably, the continuous image data collected by the camera may be colour or black-and-white. Black-and-white images can be used directly; for colour images the green channel of red-green-blue is preferably used. Alternatively, through a colour-space transformation the red-green-blue channels can be converted to the HSV or HLS colour space, using the value of the V (brightness) channel in HSV, or of the L (lightness) channel in HLS, as the input quantity. Images should be captured at 25 Hz or above, i.e. at least 25 consecutive frames per second.
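The channel choices above can be sketched directly from the RGB values, since V in HSV is max(R, G, B) and L in HLS is (max + min)/2. A minimal sketch; the function name and modes are illustrative.

```python
import numpy as np

def brightness_channel(frame, mode="green"):
    """Extract the input channel named in the text.

    frame -- H x W x 3 RGB array, or H x W for an already-grayscale image
    mode  -- "green" (default suggested in the text), "hsv_v" or "hls_l"
    """
    f = np.asarray(frame, float)
    if f.ndim == 2:                # black-and-white: use directly
        return f
    if mode == "green":
        return f[..., 1]           # green channel of RGB
    mx, mn = f.max(axis=-1), f.min(axis=-1)
    if mode == "hsv_v":
        return mx                  # HSV value channel: max of R, G, B
    if mode == "hls_l":
        return (mx + mn) / 2.0     # HLS lightness channel
    raise ValueError(mode)
```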
In a preferred embodiment, the emotion judgment model is obtained through the following sub-steps.

Sub-step 1: collect physiological data (including heart beat intervals, also called R-R intervals) and facial expressions through collection devices, and convert the physiological data into a sympathetic activity index and a parasympathetic activity index.

Sub-step 2: set an emotional arousal label and an emotional valence label, select the specific arousal level in the arousal label and the specific valence in the valence label, and combine the comprehensive neural activity index data, facial expression data and emotion labels into basic data.

Sub-step 3: adjust the format of the basic data to obtain basic data in a unified format, and judge whether this unified-format basic data meets the requirements.

Sub-step 4: select usable data from the unified-format basic data that meets the requirements.

Sub-step 5: obtain the emotion judgment model from the usable data of sub-step 4.

In a preferred embodiment, the collection devices include wearable wristbands, smart watches and cameras, and preferably may also include massage chairs, treadmills and the like. After physiological data is collected and label data recorded by a collection device, all data may be transmitted in real time to a remote server for statistical storage, or a storage chip may be integrated into the collection device for real-time storage and processing.

Preferably, in sub-step 1, two sets of data, a sympathetic activity index and a parasympathetic activity index, are converted and output for each collected heart beat interval, giving the scheme of this application a fine time granularity.

In sub-step 1, the two kinds of nerves jointly influence the heartbeat, and the periodic mutual proactive influence of their activity ultimately constitutes heart rate variability.
Preferably, the emotional arousal label provides several numerical values representing the degree of arousal, from which the value matching the actual situation is chosen; preferably the arousal label provides 5-10 numerical levels, and the participant chooses the level closest to his or her actual state. The arousal label represents the degree of emotional arousal: the lowest value represents complete calm, and larger values represent greater agitation.

The emotional valence label provides several numerical values representing valence, from which the value matching the actual situation is chosen; preferably the valence label provides 2-10 numerical levels, and the participant chooses the level closest to his or her actual state. The valence label represents how positive or negative the emotion is: the lowest value represents the most negative, and larger values represent more positive emotion. Two valence labels with the same number of levels have a unified data format, as do two arousal labels with the same number of levels.
Preferably, a standardized emotional arousal score is used as the original label score of the emotional arousal label;
Preferably, the PANAS standard score is used as the original label score of the emotional valence label, where positive affect has a mean of 29.7 with a standard deviation of 7.9, and negative affect has a mean of 14.8 with a standard deviation of 5.4.
Further preferably, for both the emotional arousal label and the emotional valence label, the range of plus or minus 1.96 standard deviations is divided into 10 bins according to the frequency of the data distribution.
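A minimal sketch of this binning step, assuming "divided according to frequency" means equal-frequency quantile bins; the function name, interface, and use of NumPy are illustrative and not taken from the patent:

```python
import numpy as np

def to_level(scores, n_levels=10, mean=29.7, sd=7.9):
    """Map raw PANAS-style scores to discrete label levels.

    The +/-1.96-SD range is divided into n_levels bins by frequency
    (equal-frequency quantile bins); values outside the range are
    clipped into the extreme bins.
    """
    scores = np.asarray(scores, dtype=float)
    lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
    clipped = np.clip(scores, lo, hi)
    # Internal bin edges taken from the observed distribution itself.
    edges = np.quantile(clipped, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(clipped, edges) + 1  # levels 1..n_levels
```

With 100 evenly spread scores this yields roughly 10 scores per level, which is what frequency-based division implies.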
Preferably, in sub-step 2, the emotion labels include an emotional arousal label and an emotional valence label; the two labels may be provided separately or together in the form of coordinates or a chart. The emotional arousal label records emotional arousal data, and the emotional valence label records emotional valence data.
Preferably, in sub-step 2, the comprehensive neural activity index is related to the sympathetic activity index and the parasympathetic activity index, and each comprehensive neural activity index contains one or more of the following: the sympathetic activity index, the parasympathetic activity index, the quotient of the sympathetic and parasympathetic activity indices, the sum of the two, the difference of the two, and so on.
Preferably, the comprehensive neural activity index data are collected at a relatively high frequency: 60-90 or even more sets can be provided per minute. Each time the comprehensive neural activity index data are collected, the facial expression at that moment is captured by the camera whenever possible; since facial expressions play an auxiliary role in model processing, the more comprehensively they are collected, the better the model performs.
The emotion labels are collected at a comparatively low frequency, for example once an hour or 2-5 times a day, and each collection is accompanied by a facial expression captured by the camera at that moment. Each emotion label datum therefore corresponds to multiple comprehensive neural activity index data; combining one emotion label datum, the facial expression data, and the corresponding multiple comprehensive neural activity index data constitutes one piece of basic data. Each emotion label datum contains both emotional arousal data and emotional valence data.
Preferably, the value levels of the emotional valence label and the emotional arousal label may be the same or different, which can cause mismatches or misaligned data during statistics. For this reason, in sub-step 3, adjusting the format of the basic data mainly consists of adjusting the values and value levels in the emotion label data. Specifically, a standard number of value levels is first set; if, for example, it is set to 5, the number of value levels in the basic data is adjusted to 5, and the level value selected in the basic data is rescaled proportionally onto the 5-level scale, rounding up when the division is not exact.
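The proportional rescaling with round-up described above can be sketched as follows (the function name is illustrative; levels are taken as 1-based, which the patent does not state explicitly):

```python
import math

def rescale_level(level, src_levels, dst_levels=5):
    """Proportionally map a 1-based label level from a src_levels scale
    onto a dst_levels scale, rounding up on inexact division as the
    text specifies."""
    return math.ceil(level * dst_levels / src_levels)
```

For example, level 7 on a 10-level scale maps to ceil(7 * 5 / 10) = 4 on a 5-level scale.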
In a preferred embodiment, determining in sub-step 3 whether the basic data in the unified format meet the requirements includes the following sub-sub-steps:
Sub-sub-step 1: all unified-format basic data are randomly divided into two groups according to a predetermined ratio, namely a learning group and a test group; preferably, the ratio may be 8-9:1, and more preferably the ratio of the amount of data in the learning group to that in the test group is 8:1;
Sub-sub-step 2: the model is trained with the data in the learning group, then verified one by one with each datum in the test group, and the verification result of each test-group datum is recorded. Preferably, the verification results include "passed" and "failed". "Passed" means that feeding the comprehensive neural activity index data and facial expression data of a unified-format basic datum in the test group into the model yields emotion label data consistent with the emotion label data of that basic datum, i.e., both the degree of emotional agitation and the emotional valence agree; "failed" means that the emotion label data so obtained are inconsistent with those of the basic datum, i.e., the degree of emotional agitation and/or the emotional valence disagree;
Sub-sub-step 3: the above sub-sub-steps 1 and 2 are repeated multiple times, wherein unified-format basic data that have already been assigned to a test group are no longer assigned to the test group, ensuring that every unified-format basic datum has, in some test group, been used to verify a model trained on the learning-group data, until all unified-format basic data have obtained corresponding verification results;
Sub-sub-step 4: the total pass rate of the verification results of all unified-format basic data is computed, the total pass rate being the ratio of the number of unified-format basic data whose verification result is "passed" to the total number of unified-format basic data. When the total pass rate is not greater than 85%, these unified-format basic data are deemed not to meet the basic requirements and are all discarded, and sub-steps 1 and 2 are repeated to obtain new basic data; when the result of sub-sub-step 4, i.e., the total pass rate, is greater than 85%, these unified-format basic data are deemed to meet the usage requirements and can be processed in the next step.
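Rotating every datum through the test group exactly once amounts to a k-fold scheme (k = 9 gives the preferred 8:1 split). A minimal sketch, assuming generic `train` and `predict` callables that stand in for the model training and verification steps:

```python
import random

def total_pass_rate(data, train, predict, k=9):
    """Split data into k folds, use each fold once as the test group
    and the rest as the learning group, and return the overall pass
    rate (fraction of test-group predictions matching the label).

    data    -- list of (features, label) pairs
    train   -- callable: list of pairs -> model
    predict -- callable: (model, features) -> label
    """
    data = data[:]
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    passed = 0
    for i, test_group in enumerate(folds):
        learning = [d for j, f in enumerate(folds) if j != i for d in f]
        model = train(learning)
        passed += sum(predict(model, x) == y for x, y in test_group)
    return passed / len(data)
```

Every datum lands in exactly one test group, matching the requirement that data once assigned to the test group are not assigned again.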
In a preferred embodiment, obtaining the available data in sub-step 4 includes the following sub-sub-steps:
Sub-sub-step a: outlier data are eliminated for each model-parameter combination by a gradient-style procedure, screening out models of high ecological utility. Specifically, sub-sub-steps 1-3 of sub-step 3 are repeated multiple times, and each repetition of sub-sub-step 1 yields a test group composed of different unified-format basic data, i.e., all test groups are different; preferably, sub-sub-steps 1-3 are repeated 8-10 times, so that every unified-format basic datum corresponds to multiple verification results, after which the average pass rate of each unified-format basic datum is computed. The average pass rate of a unified-format basic datum is the ratio of the number of "passed" results among its verification results to the total number of its verification results.
Sub-sub-step b: the one unified-format basic datum with the lowest average pass rate is found and hidden; when several unified-format basic data share the same lowest average pass rate, any one of them may be hidden. Hidden data take no further part in any computation until restored. Sub-sub-steps 1-4 are then executed again with the remaining unified-format basic data, and it is observed whether the total pass rate is higher than before the datum was hidden. If the total pass rate is higher, the hidden unified-format basic datum is deleted and sub-sub-step c is executed; if not, the hidden datum is restored, and the unified-format basic datum with the second-lowest average pass rate is selected and hidden, wherein if several unified-format basic data share the same lowest average pass rate, another unified-format basic datum with the lowest rate may be selected instead; the above process is repeated until the total pass rate rises;
Sub-sub-step c: after the total pass rate has risen, sub-sub-steps a and b are repeated on the basis of the remaining unified-format basic data; each time the total pass rate rises, sub-sub-steps a and b continue to be repeated on the then-remaining unified-format basic data, until the total pass rate reaches 90% or more, preferably 92% or more, or until the deleted unified-format basic data amount to 30% of the total unified-format basic data; the unified-format basic data remaining at that point are the available data.
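The hide-retest-delete loop of sub-sub-steps b and c can be sketched as a greedy pruning procedure. The evaluation callables are placeholders for the cross-validation machinery described above; the function name and stopping parameters mirror, but are not literally taken from, the patent text:

```python
def prune_outliers(data, evaluate, avg_rates, max_drop=0.3, target=0.92):
    """Greedily drop the datum with the lowest average pass rate
    whenever doing so raises the total pass rate; stop at the target
    rate or once 30% of the data have been dropped.

    evaluate  -- callable: data -> total pass rate
    avg_rates -- callable: data -> per-datum average pass rates
    """
    limit = int(len(data) * max_drop)
    dropped = 0
    best = evaluate(data)
    while best < target and dropped < limit:
        rates = avg_rates(data)
        order = sorted(range(len(data)), key=lambda i: rates[i])
        for i in order:                      # try worst first, then next-worst
            trial = data[:i] + data[i + 1:]  # "hide" datum i
            rate = evaluate(trial)
            if rate > best:                  # improvement: delete it for good
                data, best, dropped = trial, rate, dropped + 1
                break
        else:                                # no single removal helps: stop
            break
    return data, best
```

"Hiding" is modeled by evaluating a copy without the datum, and deletion happens only when the total pass rate actually improves, as sub-sub-step b requires.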
Preferably, the model in sub-sub-step 2 includes the great majority of supervised-learning models, and its training involves the combined judgment of multiple supervised models; the specific training methods include, but are not limited to, linear regression, support vector machines, gradient descent, naive Bayes classification, decision-tree classification, AdaBoost, XGBoost, and multi-layer neural networks. Preferably, the average of the two mutually closest results among those of a 3-4-layer multi-layer neural network, a C4.5 decision tree, and XGBoost is used as the output of each training pass; i.e., the combination of a 3-4-layer multi-layer neural network, a C4.5 decision tree, and XGBoost is the most preferred model, namely the model of high ecological utility. Preferably, in the present application, the neural network is a one-dimensional convolutional neural network.
In step 5, in the process of obtaining the emotion judgment model, the comprehensive neural activity index data, facial expression data, and emotional arousal data in each available datum are spliced into one data segment that serves as learning material, and the emotional arousal emotion judgment model is obtained through machine learning;
the comprehensive neural activity index data, facial expression data, and emotional valence data in each available datum are spliced into one data segment that serves as learning material, and the emotional valence emotion judgment model is obtained through machine learning. The emotion judgment model includes the emotional arousal prediction model and the emotional valence prediction model.
Preferably, in step 5, during the learning of the emotional arousal prediction model and the emotional valence prediction model, the comprehensive neural activity indices, facial expression data, and label data are used simultaneously to train three models — a 3-4-layer neural network, a C4.5 decision tree, and XGBoost — yielding a multi-layer neural network model, a decision tree model, and an XGBoost computation module model. The combination of these three models serves as the emotion judgment model, whose output is the average of the two closest of the three model outputs. For example, if for one set of data the three models output 8, 20, and 7, the outputs 7 and 8 are closest to each other, so the final model output is 7, i.e., the average of 7 and 8, rounded down.
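The "average of the two closest outputs, rounded down" rule can be sketched directly; the function name is illustrative:

```python
import math

def ensemble_output(a, b, c):
    """Combine three model outputs by averaging the two closest values
    and rounding down, as in the worked example (8, 20, 7 -> 7)."""
    pairs = [(abs(a - b), a, b), (abs(a - c), a, c), (abs(b - c), b, c)]
    _, x, y = min(pairs)          # the pair with the smallest gap
    return math.floor((x + y) / 2)
```

This reproduces the worked example: outputs 8, 20, 7 give the pair (8, 7), whose average 7.5 rounds down to 7.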
In a preferred embodiment, in sub-steps 1-5, 1,000 participants of all age groups are recruited and tracked continuously for 2 weeks to 2 months to obtain tracking data. The participants' physiological data come from wearable devices such as smart watches and from scanning sensors, facial expression data come from cameras, and rating data come from the participants' daily self-assessments. Physiological data are collected as 90-second segments every 10 minutes, tracked continuously around the clock. For the rating data of the emotional arousal label and the emotional valence label, participants are required to assess their degree of agitation and their emotional valence at least 3 times a day, and a facial expression photograph is taken by the camera each time the emotional arousal label and emotional valence label are filled in.
In a preferred embodiment, in step 4, on the basis of the existing emotion judgment model, which includes the emotional arousal prediction model and the emotional valence prediction model, the inter-beat intervals and facial expressions in the storage unit are input into the two models to obtain the corresponding emotional arousal and emotional valence.
Specifically, the inter-beat interval RRI is first converted into the sympathetic output and parasympathetic output of the comprehensive neural activity index:
Using the Laguerre-function recursion, the dependent variable is the most recent RRI, and the independent variables are the 8 decomposition terms X of the Laguerre recursion; each decomposition term consists of an unknown coefficient G, an inferable coefficient φ, and an RRI value. The overall estimation expression is shown in formula (1):
where S denotes the upper limit of j, i.e., the order of the Laguerre polynomial; this order determines how many past RRIs are used to fit one expression — the higher the order, the more accurate the result — and 9 are preferably used. j denotes the order of the orthogonal Laguerre discrete-time function. g(j,t) denotes the coefficient matrix obtained by combining the j-th-order Laguerre polynomial with the RRI interval times within time range t; the coefficients in this matrix are the coefficients of each included RRI, the purpose being to merge multiple RRIs into one recursive Laguerre polynomial, fitting the most recent RRI with the past RRIs so that the RRIs form a recursive relationship. F(t) denotes the position ordinal of a specific interval within the sequence of adjacent inter-beat intervals included in the computation; n denotes the index of the RRI counted backwards from the current RRI; RR_F(t)-n denotes any such RRI, obtained by the Laguerre polynomial recursion; φ_j denotes the j-th-order orthogonal Laguerre discrete-time function, obtained by formula (2);
α is a constant, with a value of 0.2;
Starting from the most recent RRI, 8 RRIs are taken backwards in time and substituted as the above RRIs to obtain the RRI combination, forming RRI = Σ_(i∈0-2) X_i + Σ_(i∈3-8) X_i. The 8 unknown coefficients G are obtained by Kalman autoregression. Substituting gives Σ_(i∈0-2) N_i G_i and Σ_(i∈3-8) N_i G_i, which represent the sympathetic and parasympathetic output values of the comprehensive neural activity index, respectively. The accompanying coefficients N use the constants 39, 10, -5, 28, -17, 6, 12, 6, -7, -6, -4, respectively.
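Formula (2) for the orthogonal Laguerre discrete-time function is referenced but not reproduced in this text. As an assumed reconstruction, the following sketch uses the standard discrete Laguerre basis common in point-process heart-rate-variability models, with α = 0.2 as stated; it is not guaranteed to be the exact form intended by the patent:

```python
from math import comb, sqrt

def laguerre_phi(j, k, alpha=0.2):
    """Assumed form of the j-th-order orthogonal Laguerre discrete-time
    function phi_j(k):

        phi_j(k) = alpha^((k-j)/2) * (1-alpha)^(1/2)
                   * sum_{i=0..j} (-1)^i C(k,i) C(j,i)
                     * alpha^(j-i) * (1-alpha)^i
    """
    if k < 0:
        return 0.0
    s = sum((-1) ** i * comb(k, i) * comb(j, i)
            * alpha ** (j - i) * (1 - alpha) ** i
            for i in range(j + 1))
    return alpha ** ((k - j) / 2) * sqrt(1 - alpha) * s
```

Under this form the basis is orthonormal over k ≥ 0, which is the property that lets the coefficient matrix g(j,t) compress many past RRIs into a small set of decomposition terms.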
The comprehensive neural activity indices and facial expressions are fed into the emotional arousal prediction model, and likewise into the emotional valence prediction model; the two models each perform the following processing:
The emotional arousal prediction model includes a 3-4-layer multi-layer neural network model, a C4.5 decision tree model, and an XGBoost computation module model. After receiving the comprehensive neural activity indices and facial expressions, the emotional arousal prediction model obtains the values output by these three models, selects the two closest of the three output values, and takes their average as the output of the emotional arousal model.
The emotional valence prediction model likewise includes a 3-4-layer multi-layer neural network model, a C4.5 decision tree model, and an XGBoost computation module model. After receiving the comprehensive neural activity indices and facial expressions, the emotional valence prediction model obtains the values output by these three models, selects the two closest of the three output values, and takes their average as the output of the emotional valence prediction model.
The resulting degree of emotional arousal and degree of emotional valence together constitute the emotional state of the monitored person.
In a preferred embodiment, in step 5, the emotional state of the monitored person is compared with the average emotional agitation value and the average emotional valence value of the calm state.
When the monitored person is engaged in important, high-difficulty, or high-intensity work, an alarm is issued when
his or her degree of emotional agitation is more than 1.5 standard deviations above the calm-state average agitation value,
or his or her degree of emotional agitation is more than 1 standard deviation below the calm-state average agitation value,
or his or her emotional valence is more than 1.5 standard deviations below the calm-state average emotional valence value;
The calm state described in this application refers to a data set collected from at least 100 participants in a calm state — the collection of all emotion values gathered. The mean and standard deviation of this data set are computed and used as the basis for judgment, with the alarm-triggering threshold set at the mean plus 1.5 (or 1) standard deviations. That is, among the emotion values obtained through the emotion judgment model, if the obtained emotional valence value is higher than the average emotional valence value plus 1-1.5 standard deviations, or lower than the average emotional valence value minus 1-1.5 standard deviations, or the obtained emotional agitation value is higher than the average agitation value plus 1-1.5 standard deviations, or lower than the average agitation value minus 1-1.5 standard deviations, then whether to trigger the alarm is decided according to the circumstances of the person concerned.
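The alarm conditions above for high-risk occupations can be sketched as a simple threshold check against calm-state baselines; the function name and the `stats` dictionary layout are illustrative, not from the patent:

```python
def should_alarm(arousal, valence, stats, k_high=1.5, k_low=1.0):
    """Alarm when agitation is more than k_high SDs above, or more than
    k_low SDs below, the calm-state mean, or valence is more than
    k_high SDs below its calm-state mean.

    stats -- dict of calm-state baselines:
             arousal_mean, arousal_sd, valence_mean, valence_sd
    """
    return (arousal > stats['arousal_mean'] + k_high * stats['arousal_sd']
            or arousal < stats['arousal_mean'] - k_low * stats['arousal_sd']
            or valence < stats['valence_mean'] - k_high * stats['valence_sd'])
```

The stricter public-safety condition described below (agitation more than 2 SDs above and valence more than 2 SDs below the calm-state means) would use the same baselines with both checks required simultaneously.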
The important, high-difficulty, or high-intensity work refers to work requiring long-term presence in dangerous or potentially dangerous working environments, such as work at heights, driving, operation of engineering machinery such as cranes, and the control and maintenance of important facilities such as power plants;
The ordinary work refers to occupations that are neither high-risk nor in substantially hazardous environments and that are of average labor intensity, such as clerks, editors, service-industry workers, students, teachers, and librarians; for such persons an alarm is issued when
the degree of emotional agitation is more than 1.5 standard deviations above the calm-state average agitation value,
or the degree of emotional agitation is more than 1 standard deviation below the calm-state average agitation value,
or the emotional valence is more than 1.5 standard deviations below the calm-state average emotional valence value;
When the monitored person's degree of emotional agitation is more than 2 standard deviations above the calm-state average agitation value and his or her emotional valence is more than 2 standard deviations below the calm-state average emotional valence value, the monitored person poses a potential threat to public safety and an alarm is issued. The potential threat mainly concerns persons who, in a state of extreme emotion, might endanger the safety of others in public places, such as bus drivers, crane and excavator operators, and travelers in the waiting rooms of railway stations or airports.
The average emotional agitation value is the intermediate value representing calmness in the emotional arousal label of sub-step 2, and the average emotional valence value is the intermediate value representing calmness in the emotional valence label of sub-step 2.
Example
An emotion judgment model is established. Specifically, 100 participants are selected and continuously tracked for one month. Each participant lives within the camera's field of view for at least 12 hours a day, with the face capturable by the camera for at least 8 of those hours. The participants' facial expressions and the brightness changes of the hairless regions of the face are obtained in real time through the camera; the brightness changes are converted in real time into inter-beat interval data, which are in turn converted into sympathetic and parasympathetic activity indices. In addition, each participant records the degree of emotional agitation in the emotional arousal label and the emotional valence in the emotional valence label 3 times a day; both labels contain 10 value levels. Each morning the participant records the average emotional agitation and emotional valence of that morning, each afternoon those of that afternoon, and each evening those of that evening.
A total of 610,820 RRI records and 415,128 frames of facial expression data are obtained, and the RRI data are converted into sympathetic and parasympathetic activity indices; data collection also yields 9,000 records containing emotional arousal labels and emotional valence labels. One emotion label datum and its corresponding multiple comprehensive neural activity index data are combined into one basic datum, forming 9,000 basic data in total.
To obtain the total pass rate, all 9,000 basic data are randomly divided into 9 parts, one serving as the test group and the others as the learning group; the model is trained with the learning group and then verified with the data in the test group, yielding a verification result for each test-group datum. The other parts are then used in turn as the test group and the above steps are repeated, for 9 cycles in total, ensuring that every datum has been assigned to the test group once, i.e., every datum has a corresponding verification result. The total pass rate is found to be 88%, which is higher than 85%, so the next step of processing may proceed.
Abnormal data are then eliminated from the basic data to obtain the available data. Specifically,
to obtain the average pass rate, all basic data are re-divided into 9 parts, one serving as the test group and the others as the learning group; the model is trained with the learning group and then verified with the test-group data, yielding a verification result for each datum. The test and learning groups are then reassigned and the above process is repeated at least 81 times, ensuring that every basic datum is placed in the test group at least 9 times, i.e., every basic datum obtains 9 corresponding verification results, from which the average pass rate of each basic datum is obtained;
the one basic datum with the lowest average pass rate is found and hidden, and the above processes for obtaining the average pass rate and the total pass rate are executed again with the remaining 8,999 basic data, observing whether the total pass rate is higher than before the datum was hidden. If the total pass rate is higher, the hidden basic datum is deleted; if not, the hidden datum is restored, and the basic datum with the second-lowest average pass rate is selected and hidden; the above process of obtaining the total pass rate is repeated until the total pass rate rises;
after the pass rate has risen, the hidden datum is deleted and, on the basis of the remaining basic data, the above process of obtaining the average pass rate continues: the average pass rate of each basic datum is computed, the datum with the lowest average pass rate is found and hidden, the total pass rate is computed on that basis, and this elimination process is repeated continuously on the data that remain after each deletion.
The process stops when 2,700 data have been deleted; the remaining data are the available data.
The emotional arousal prediction model and the emotional valence prediction model are obtained from the available data. Specifically,
the available data are used to train a one-dimensional convolutional neural network, a C4.5 decision tree, and an XGBoost computation module, yielding a one-dimensional convolutional neural network model, a C4.5 decision tree model, and an XGBoost computation module model; the three models combine to form the emotion judgment model. When the emotion judgment model receives new comprehensive neural activity indices and facial expression information, it copies the received information into 3 copies and transmits them to the one-dimensional convolutional neural network model, the C4.5 decision tree model, and the XGBoost computation module model respectively; the output of the emotion judgment model is the mean of the two closest of the three model outputs. In this way the emotional arousal prediction model and the emotional valence prediction model, i.e., the emotion judgment model, are obtained.
Fifty monitored persons working in the main control room of a nuclear power plant were selected, and images containing their faces were captured in real time by camera.
Facial recognition is used to distinguish the identity of each monitored person in the captured images, and a corresponding independent storage unit is set up for each monitored person.
The heartbeat intervals and facial expressions of each monitored person are obtained by reading the captured images, and both are stored in that person's independent storage unit.
The heartbeat intervals and facial expressions are fed into the emotion judgment model in real time to judge the monitored person's emotional state.
After 12 hours, the emotional changes of the 50 monitored persons over that period were obtained; the emotional state of one of them is shown in Figures 2 and 3. Figure 2 shows the curve of that person's emotional arousal, i.e., the degree of emotional agitation: the abscissa represents time and the ordinate the degree of agitation, with higher values indicating stronger agitation. The dashed line in the middle of Figure 2 represents the average agitation value.
Figure 3 shows the curve of that person's emotional valence: the abscissa represents time and the ordinate the valence value, with higher values indicating greater valence. The dashed line in the middle of Figure 3 represents the average valence value.
The emotional agitation of all 50 monitored persons remained within ±0.7 standard deviations of the average agitation value, and their emotional valence remained within ±1 standard deviation of the average valence value in the calm state, so no alarm information needed to be issued.
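The alarm criterion above (arousal within ±0.7 standard deviations of its average, valence within ±1 standard deviation of the calm-state average) can be sketched as a per-sample check. The baseline means and standard deviations are assumed to come from a prior calm-state calibration period; the parameter names are ours.

```python
def needs_alarm(arousal, valence, a_base, a_sd, v_base, v_sd,
                a_k=0.7, v_k=1.0):
    """Return True when a sample leaves the allowed band: arousal
    outside ±0.7 SD of the baseline agitation mean, or valence
    outside ±1 SD of the calm-state valence mean (thresholds taken
    from the embodiment; baselines assumed pre-calibrated)."""
    arousal_out = abs(arousal - a_base) > a_k * a_sd
    valence_out = abs(valence - v_base) > v_k * v_sd
    return arousal_out or valence_out
```

In a deployment, this check would run on each new model output; no alarm was issued in the experiment because every sample stayed inside both bands.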
The monitored persons were then asked to self-evaluate their emotional changes over 14 hours. Figure 4 shows the self-evaluation of one monitored person. Scoring uses a sliding-bar interface, with the following scheme: within 6 hours of getting up, the person rates their emotional state for the morning period, i.e., for the interval from getting up to the first rating; within 6 to 10 hours of getting up, the person rates their emotional state for the afternoon period, i.e., for the interval from the first rating to the second rating; from 10 hours after getting up until bedtime, the person rates their emotional state for the evening period, i.e., for the interval from the second rating to the third rating.
The self-evaluated emotional states of the 50 monitored persons were tallied and compared with the emotional states given by the emotion judgment model; the match rate reached 85%.
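A match rate such as the 85% reported above can be computed by comparing the model's per-period label with the self-reported label for the same period. The label format here is an assumption for illustration.

```python
def match_rate(model_labels, self_labels):
    """Fraction of periods where the model's emotion label agrees
    with the self-reported label for the same period."""
    assert len(model_labels) == len(self_labels)
    hits = sum(m == s for m, s in zip(model_labels, self_labels))
    return hits / len(model_labels)
```

With 50 persons each giving three period ratings, the comparison would run over 150 label pairs.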
These results show that the camera-based emotional early warning method provided by this application can judge changes in the monitored person's emotional state in a timely and accurate manner.
The present invention has been described above with reference to preferred embodiments, but these embodiments are merely exemplary and serve an illustrative purpose. On this basis, various substitutions and improvements may be made to the present invention, all of which fall within its scope of protection.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110352232.3A CN113143274B (en) | 2021-03-31 | 2021-03-31 | Camera-based emotional early warning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113143274A CN113143274A (en) | 2021-07-23 |
CN113143274B true CN113143274B (en) | 2023-11-10 |
Family
ID=76886333
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110352232.3A Active CN113143274B (en) | 2021-03-31 | 2021-03-31 | Camera-based emotional early warning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113143274B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114241565B (en) * | 2021-12-15 | 2025-05-27 | 北京易华录信息技术股份有限公司 | A method, device and equipment for analyzing facial expression and target object state |
CN115316991B (en) * | 2022-01-06 | 2024-02-27 | 中国科学院心理研究所 | Self-adaptive recognition early warning method for irritation emotion |
CN114407832A (en) * | 2022-01-24 | 2022-04-29 | 中国第一汽车股份有限公司 | Monitoring method for preventing vehicle body from being scratched and stolen, vehicle body controller and vehicle |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101095612A (en) * | 2006-06-28 | 2008-01-02 | 株式会社东芝 | Apparatus and method for monitoring biological information |
JP2012059107A (en) * | 2010-09-10 | 2012-03-22 | Nec Corp | Emotion estimation device, emotion estimation method and program |
CN104112055A (en) * | 2013-04-17 | 2014-10-22 | 深圳富泰宏精密工业有限公司 | System and method for analyzing and displaying emotion |
CN107506716A (en) * | 2017-08-17 | 2017-12-22 | 华东师范大学 | A kind of contactless real-time method for measuring heart rate based on video image |
CN108882883A (en) * | 2015-12-09 | 2018-11-23 | 安萨尔集团有限公司 | Parasympathetic autonomic nerves system is measured to while sympathetic autonomic nerves system to independent activities, related and analysis method and system |
CN109670406A (en) * | 2018-11-25 | 2019-04-23 | 华南理工大学 | A kind of contactless emotion identification method of combination heart rate and facial expression object game user |
CN109890289A (en) * | 2016-12-27 | 2019-06-14 | 欧姆龙株式会社 | Mood estimates equipment, methods and procedures |
CN110200640A (en) * | 2019-05-14 | 2019-09-06 | 南京理工大学 | Contactless Emotion identification method based on dual-modality sensor |
CN110422174A (en) * | 2018-04-26 | 2019-11-08 | 李尔公司 | Biometric sensor is merged to classify to Vehicular occupant state |
CN110621228A (en) * | 2017-05-01 | 2019-12-27 | 三星电子株式会社 | Determining emotions using camera-based sensing |
CN111881812A (en) * | 2020-07-24 | 2020-11-03 | 中国中医科学院针灸研究所 | Multi-modal emotion analysis method and system based on deep learning for acupuncture |
CN112220455A (en) * | 2020-10-14 | 2021-01-15 | 深圳大学 | Emotion recognition method and device based on video electroencephalogram signals and computer equipment |
CN112263252A (en) * | 2020-09-28 | 2021-01-26 | 贵州大学 | A PAD emotion dimension prediction method based on HRV features and three-layer SVR |
CN112507959A (en) * | 2020-12-21 | 2021-03-16 | 中国科学院心理研究所 | Method for establishing emotion perception model based on individual face analysis in video |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140221866A1 (en) * | 2010-06-02 | 2014-08-07 | Q-Tec Systems Llc | Method and apparatus for monitoring emotional compatibility in online dating |
TWI510216B (en) * | 2013-04-15 | 2015-12-01 | Chi Mei Comm Systems Inc | System and method for displaying analysis of mood |
US10285634B2 (en) * | 2015-07-08 | 2019-05-14 | Samsung Electronics Company, Ltd. | Emotion evaluation |
JP6985005B2 (en) * | 2015-10-14 | 2021-12-22 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Emotion estimation method, emotion estimation device, and recording medium on which the program is recorded. |
TW201801037A (en) * | 2016-06-30 | 2018-01-01 | 泰金寶電通股份有限公司 | Emotion analysis method and electronic apparatus thereof |
JP7251392B2 (en) * | 2019-08-01 | 2023-04-04 | 株式会社デンソー | emotion estimation device |
- 2021-03-31: CN CN202110352232.3A patent/CN113143274B/en active Active
Non-Patent Citations (4)
Title |
---|
Photoplethysmography based psychological stress detection with pulse rate variability feature differences and elastic net. International Journal of Distributed Sensor Networks, 2018, pp. 1-14. *
孔璐璐 (Kong Lulu). Research on drivers' anger emotion based on the fusion of facial expression and pulse information. China Master's Theses Full-text Database, 2014, pp. I138-308. *
李昌竹, 郑士春, 陆梭, et al. A study of the relationship between heart rate variability and the neuroticism dimension of personality. Studies of Psychology and Behavior, 2020, pp. 275-280. *
陈明 (Chen Ming). Introduction to Big Data Technology. Beijing: China Railway Publishing House, 2019, pp. 120-121. *
Also Published As
Publication number | Publication date |
---|---|
CN113143274A (en) | 2021-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113143274B (en) | Camera-based emotional early warning method | |
Kocielnik et al. | Smart technologies for long-term stress monitoring at work | |
Shastri et al. | Perinasal imaging of physiological stress and its affective potential | |
US20150305662A1 (en) | Remote assessment of emotional status | |
CN114792553B (en) | A method and system for screening student mental health groups | |
CN113499035B (en) | A Pain Recognition System Based on Confidence Interval Fusion Threshold Criterion | |
CN109222888A (en) | A method of psychological test reliability is judged based on eye movement technique | |
Yu et al. | Air traffic controllers' mental fatigue recognition: A multi-sensor information fusion-based deep learning approach | |
CN106295986A (en) | Health detection based on intelligent mobile terminal management system | |
CN113647950A (en) | Psychological emotion detection method and system | |
WO2021146368A1 (en) | Artificial intelligence-based platform to optimize skill training and performance | |
CN113057599A (en) | Machine for rapidly evaluating pain | |
KR20190142618A (en) | Method for monitoring cardiac impulse of fetus and apparatus therefor | |
CN115607153A (en) | Psychological scale answer quality evaluation system and method based on eye movement tracking | |
EP3529764A1 (en) | Device for determining features of a person | |
Bonyad et al. | The relation between mental workload and face temperature in flight simulation | |
Li et al. | A deep cybersickness predictor through kinematic data with encoded physiological representation | |
Huang et al. | Automatic recognition of schizophrenia from facial videos using 3D convolutional neural network | |
Georges et al. | Emotional maps for user experience research in the wild | |
CN113362951A (en) | Human body infrared thermal structure attendance and health assessment and epidemic prevention early warning system and method | |
Kavitha et al. | A novel approach for driver drowsiness detection using deep learning | |
CN107067152A (en) | A kind of fatigue recovery Index Monitoring device and method analyzed based on HRV | |
CN114098729B (en) | Objective measurement method of emotional state based on cardiac interval | |
CN119028587A (en) | Home health risk monitoring method and hierarchical management system based on AI smart devices | |
CN211749663U (en) | Staff emotion prediction system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20220311 Address after: 100101 courtyard 16, lincui Road, Chaoyang District, Beijing Applicant after: INSTITUTE OF PSYCHOLOGY, CHINESE ACADEMY OF SCIENCES Address before: 101400 3rd floor, 13 Yanqi street, Yanqi Economic Development Zone, Huairou District, Beijing Applicant before: Beijing JingZhan Information Technology Co.,Ltd. |
GR01 | Patent grant | ||