CN112699785A - Group emotion recognition and abnormal emotion detection method based on dimension emotion model - Google Patents
Group emotion recognition and abnormal emotion detection method based on dimension emotion model
- Publication number
- CN112699785A (application number CN202011601643.3A)
- Authority
- CN
- China
- Prior art keywords
- emotion
- group
- abnormal
- dimension
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000008451 emotion Effects 0.000 title claims abstract description 178
- 230000002159 abnormal effect Effects 0.000 title claims abstract description 40
- 238000001514 detection method Methods 0.000 title claims abstract description 28
- 230000008909 emotion recognition Effects 0.000 title claims abstract description 28
- 238000000034 method Methods 0.000 claims abstract description 46
- 230000008859 change Effects 0.000 claims abstract description 26
- 230000006399 behavior Effects 0.000 claims abstract description 19
- 238000013507 mapping Methods 0.000 claims abstract description 8
- 230000001149 cognitive effect Effects 0.000 claims abstract description 5
- 230000002996 emotional effect Effects 0.000 claims description 35
- 230000006870 function Effects 0.000 claims description 30
- 230000014509 gene expression Effects 0.000 claims description 24
- 238000000605 extraction Methods 0.000 claims description 22
- 238000002372 labelling Methods 0.000 claims description 19
- 238000004364 calculation method Methods 0.000 claims description 18
- 230000003287 optical effect Effects 0.000 claims description 15
- 238000012549 training Methods 0.000 claims description 15
- 239000011159 matrix material Substances 0.000 claims description 9
- 238000012706 support-vector machine Methods 0.000 claims description 9
- 230000004913 activation Effects 0.000 claims description 7
- 238000010586 diagram Methods 0.000 claims description 7
- 230000008921 facial expression Effects 0.000 claims description 7
- 238000004422 calculation algorithm Methods 0.000 claims description 6
- 238000012795 verification Methods 0.000 claims description 6
- 230000008569 process Effects 0.000 claims description 5
- 230000000452 restraining effect Effects 0.000 claims description 3
- 230000035945 sensitivity Effects 0.000 claims description 3
- 230000009466 transformation Effects 0.000 claims description 3
- 238000000844 transformation Methods 0.000 claims description 3
- 230000000007 visual effect Effects 0.000 claims description 3
- 238000013480 data collection Methods 0.000 abstract description 2
- 230000037007 arousal Effects 0.000 description 8
- 238000004458 analytical method Methods 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 230000009471 action Effects 0.000 description 3
- 238000002474 experimental method Methods 0.000 description 3
- 238000011160 research Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 238000007405 data analysis Methods 0.000 description 2
- 230000002950 deficient Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 230000036626 alertness Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 239000006185 dispersion Substances 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000005065 mining Methods 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000007935 neutral effect Effects 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a group emotion recognition and abnormal emotion detection method based on a dimension emotion model, relating to the technical field of intelligent emotion recognition. Based on the PAD three-dimensional emotion model from cognitive psychology, a video data set of group emotion is created through data collection and manual labeling, and the positional relationship of six typical emotions in PAD space is revealed. An emotion prediction model based on group behavior is created, mapping group motion features to three-dimensional coordinates in PAD space. An abnormal emotion classifier is constructed; when either of the two abnormal emotions, anger or fear, is detected, the scene is judged to be in an abnormal state. For group motion videos, the method accurately expresses the continuously changing state of group emotion and effectively identifies global abnormal states.
Description
Technical Field
The invention relates to the technical field of intelligent emotion recognition, in particular to a group emotion recognition and abnormal emotion detection method based on a dimension emotion model.
Background
In recent years, with the continuous development of artificial intelligence, deep learning, psychology and cognitive science, using computers to recognize, understand, express and communicate human emotion has given machines a more comprehensive, higher level of intelligence, and the topic has attracted increasingly wide attention and deep exploration in academia. For intelligent video surveillance, the goal is to collect the verbal communication, facial expressions and body movements of the crowd in a scene, understand and experience their joy, anger and sadness, analyze their emotional state and inner intention, infer their next action, and have the computer respond accordingly, so that the system gains communication capability at the emotional level. As one of the important development directions of future intelligent surveillance technology, vision-based emotion analysis and emotion recognition has very important academic research value.
Humans convey emotion through many information media, including words, language, facial expressions and body behavior. Although both speech and facial expressions can richly express human emotion, speech signals are difficult to capture clearly in noisy public places. Moreover, given the high crowding and dynamic change of dense scenes, existing video analysis techniques can hardly locate every face in a crowd accurately and extract facial expressions reliably. Emotion analysis based on face tracking and facial feature extraction therefore struggles to achieve good results in dense scenes. A feasible alternative is to recognize and evaluate the emotional state of a crowd by analyzing the body behavior of the crowd in surveillance video.
It is worth noting that current academic work on emotion analysis of body movement usually takes the individual as the research object, emphasizing the mining and recognition of individual posture features and their emotional expression. Unlike individual movement, however, group behavior has its own internal structure and rich external forms under the combined action of subjective, environmental, social and psychological factors. On the one hand, individuals exchange information and cooperate with each other, so the group exhibits a certain tendency and integrity; on the other hand, individual movement retains a degree of autonomy and randomness, so the group also exhibits disorder and lack of structure. From the perspective of social psychology, in a dense crowd scene an individual's psychology is influenced by the surrounding environment: the individual loses some independence, forms a certain dependence on companions, gradually aligns its emotional state with the crowd, and adopts a collective, subordinate psychological state. Considering the particularity of dense scenes and the uniqueness of group psychology, it is therefore necessary to explore specific methods and strategies for analyzing group emotional states.
Emotion recognition methods based on group behavior currently fall into two main types: recognition methods based on discrete models and recognition methods based on the A-V two-dimensional emotion model. Both have shortcomings. First, unlike a single piece of speech or a single image, the content presented by surveillance video is very rich: active group movement, complex group emotion, and a degree of plot change. A discrete emotion model can only recognize a few typical scenes with a single form and high recognizability, and the specific emotion types it covers are limited and insufficient for dense crowds. In addition, group emotion has many subtle features, manifests as a combination of multiple emotions, and changes continuously over time; these characteristics cannot be expressed effectively by a discrete model. Second, the A-V two-dimensional emotion model measures emotion mainly along two dimensions, Arousal and Valence, where Arousal reflects the intensity of the emotional state and Valence reflects its type. A two-dimensional description is still somewhat simple compared with a three-dimensional emotion model; for example, existing work that adopts the A-V model distinguishes only four emotion categories, which is clearly insufficient for complex group emotions. Third, the A-V emotion model cannot distinguish certain emotions (e.g., anger and fear both belong to the high-Arousal emotions), whereas the PAD three-dimensional emotion model can distinguish them effectively (anger is a high-dominance emotion and fear a low-dominance emotion).
To solve these problems, the present application provides a group emotion recognition and abnormal emotion detection method based on a dimension emotion model, which takes the PAD dimensional model as its basis and expresses group emotion as a three-dimensional coordinate point in emotion space, so as to realize accurate expression of complex emotion.
Disclosure of Invention
The invention aims to provide a group emotion recognition and abnormal emotion detection method based on a dimension emotion model, which is based on a PAD dimension model and expresses group emotion as a three-dimensional coordinate point in an emotion space so as to realize accurate expression of complex emotion.
The invention provides a group emotion recognition and abnormal emotion detection method based on a dimension emotion model, which comprises the following steps of:
S1: Establish a PAD three-dimensional emotion model based on group emotion: the model comprises three dimensions, pleasure degree P, activation degree A and dominance degree D; the value of each dimension lies between −1 and +1, and a PAD emotion scale is set as a reference for the emotion dimensions;
S2: Establish a group behavior and group emotion data set: for video data of different scenes, acquire a standard video data set through a manual labeling strategy based on cognitive psychology principles;
S3: Compile statistics on the group emotion data set: according to the standard video data set, define the emotion type of each video, group the videos labeled with the same emotion, normalize their PAD values to [−1, 1], and determine the value of each emotion in PAD space by calculating the center point of the coordinates;
S4: Evaluate the group emotion data set: check whether the labeled data are consistent, and verify whether the labeled data follow a Gaussian distribution using the normplot function of the Matlab tool; if the labeled data do not follow a Gaussian distribution, the output curve is bent;
S5: Group emotion recognition and abnormal emotion detection: extract group motion features from the video and express the middle-level semantics of the group motion;
S6: Extract and regress the group emotional features: based on the principle of structural risk minimization, search for an optimal hyperplane and obtain a regression function with support vector regression (SVR) under the support of the training data set;
S7: Detect abnormal emotional states: take the PAD value of each labeled sample as input and train a support vector machine (SVM).
Further, in step S2, an emotion labeling system is designed for the manual labeling strategy; the system represents the P-dimension value by the facial expression of a character model, the A-dimension value by the degree of vibration of a heart icon, and the D-dimension value by the size of a small figure.
Further, the method for checking consistency in step S4 is as follows: calculate the coefficient of variation, evaluating three statistics of the PAD data, the sample mean μ, the sample standard deviation σ and the coefficient of variation CV, where the coefficient of variation is defined as:
CV = σ / μ
If the coefficient of variation is small, the consistency of the labeled data is verified to be high; otherwise, the consistency of the labeled data is low.
Further, the extraction of group motion features in step S5 comprises extraction of foreground regions, extraction of optical flow features, extraction of trajectory features, and graphical expression of the motion features. The foreground region is extracted with an improved ViBe+ algorithm, and the detected foreground region of the t-th frame is denoted R_t. The optical flow features are expressed visually with the Gunnar Farneback dense optical flow field; for the t-th frame image, the optical flow offsets of pixel point (x, y) in the horizontal and vertical directions are u and v, respectively. The trajectory features are extracted with the iDT algorithm, which densely samples video pixel points and determines the position of each tracking point in the next frame through optical flow, forming a tracking trajectory denoted T(p_1, p_2, …, p_L), where L ≤ 15. The graphical expression of the motion features adopts three graphical forms: a global motion intensity map, a global motion direction map, and a global motion trajectory map.
Further, each trajectory in the global motion trajectory map is drawn as a solid line, and each trajectory carries three attribute features <T(p_1, p_2, …, p_L), L, g_i>, where T(p_1, p_2, …, p_L) denotes the tracking points p_i that constitute the trajectory, L denotes the length of the trajectory, and g_i ∈ [0, 255] denotes the gray value of the i-th segment of the trajectory; g_i is expressed as follows:
wherein i ∈ [1, L−1].
Further, the middle-level semantics of the group motion in step S5 are analyzed in depth using a gray-level co-occurrence matrix; the adopted statistics include variance, contrast, second moment, entropy, correlation and reciprocal difference moment;
the variance is used for reflecting the gray level change degree of the image, when the variance is larger, the gray level change of the image is larger, and the calculation formula of the variance is as follows:
the contrast is used for measuring the value distribution of the matrix and the local variation in the image and reflecting the definition of the image and the depth of the texture, and a calculation formula of the contrast is as follows:
the second moment is used for measuring the gray change stability of the image texture and reflecting the gray distribution uniformity and texture thickness of the image, and if the value of the second moment is larger, the texture mode with uniform and regular change is indicated, and the calculation formula of the second moment is as follows:
the entropy is used for measuring the randomness of the information content of the image and reflecting the complexity of the gray level distribution of the image, and the calculation formula of the entropy is as follows:
the correlation is used for measuring the similarity of the elements of the space gray level co-occurrence matrix in the row or column direction and reflecting the consistency of image textures, and a calculation formula of the correlation is as follows:
the reciprocal difference moment is used for reflecting the homogeneity of the image texture and measuring the local change of the image texture, if the value is large, the change is absent among different areas of the image texture, the local uniformity is realized, and the calculation formula of the reciprocal difference moment is as follows:
further, the regression function of step S6 is as follows:
where ω is the weight vector, C is a balance coefficient, ξ_i is a slack variable, φ(·) is the nonlinear transformation that maps the data into a high-dimensional space, b is the bias term, and ε is the sensitivity;
Introducing Lagrange multipliers, equation (10) is transformed into:
the regression function finally found was:
where k(x, x_i) is a kernel function;
A radial basis function (RBF) kernel is adopted, with the expression:
k(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))    (13)
The regression model obtained after training realizes dimensional emotion prediction: a continuous value in PAD space is predicted for each video segment, and as the group emotion changes over time it is expressed as a continuous three-dimensional trajectory, presenting a gradual emotional process.
Further, the detection of the abnormal emotional state in step S7 obtains a quadratic equation of the SVM hyperplane, which is expressed as:
subject to w^T Φ(x_i) ≥ ρ − ξ_i and ξ_i ≥ 0, where x_i, i = 1, 2, …, N, denotes the training set data; w^T Φ(x) − ρ = 0 is the maximum-margin decision hyperplane; ξ_i is a slack variable that penalizes outliers; ν ∈ (0, 1] is a percentage estimate; Φ(·) is a nonlinear function that maps the training data into a high-dimensional feature space. Further, the kernel function is defined as k(x_i, x_j) = <Φ(x_i), Φ(x_j)>, a dot-product operation performed in the feature space; a Gaussian kernel function is adopted, and the decision function is defined as:
compared with the prior art, the invention has the following remarkable advantages:
the method comprises the steps of firstly, applying a three-dimensional emotion model to group emotion recognition under a dense crowd scene for the first time, and representing group emotion as a three-dimensional coordinate point in an emotion space on the basis of a PAD dimension model so as to realize accurate expression of complex emotion.
Second, a dimensional emotion data set oriented to group behavior is created for the first time, and the coordinates of the various emotions in the three-dimensional emotion space, together with the relations between them, are revealed through manual labeling and statistical analysis, laying a data foundation for subsequent emotion analysis.
Third, a series of methods for extracting emotional features from group motion is provided; under the definitions of dimensional emotion, an abstraction process and mapping method from motion to emotion are constructed through support vector regression.
Fourth, the two emotions of fear and anger are defined as abnormal emotions. By recognizing these two emotions, it can be judged that an abnormal state has occurred in the scene, which opens a novel solution for intelligent scene monitoring from the perspective of emotion recognition.
Drawings
Fig. 1 is a diagram of group abnormal emotion detection based on the UMN and PETS2009 data sets according to an embodiment of the present invention;
FIG. 2 is a diagram of data analysis of PAD dimension for video segments according to an embodiment of the present invention;
FIG. 3 is a flowchart of group emotion recognition and abnormal state detection provided by an embodiment of the present invention;
FIG. 4 is a diagram of extraction of group motion features and middle level semantic representation provided by an embodiment of the present invention;
FIG. 5 is a diagram illustrating the effect of GMIC, GMOC and GMTC provided by an embodiment of the present invention;
FIG. 6 is a flowchart illustrating an exemplary method for detecting an abnormal emotional state;
FIG. 7 is a graph of the PAD dimension space for six emotion types provided by embodiments of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention are clearly and completely described below with reference to the drawings in the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Psychological theory holds that, because of the strong correlation between a person's inner emotion and external behavior, subconscious body movements can reveal inner emotions and intentions. It is therefore feasible to identify emotional states from the emotional attributes that describe group behavior, on the basis of psychological models and social-emotion principles. From a macroscopic point of view, different crowd morphologies and movement patterns tend, as a whole, to reflect many typical emotional states.
According to analysis of the existing literature, emotion recognition methods based on group behavior can be divided, from the perspective of the underlying psychological model, into two main types: recognition methods based on discrete models and recognition methods based on the A-V two-dimensional emotion model. However, both types have shortcomings.
First, unlike a single piece of speech or a single image, the content presented by surveillance video is very rich: active group movement, complex group emotion, and a degree of plot change. A discrete emotion model can only recognize a few typical scenes with a single form and high recognizability, and the specific emotion types it covers are limited and insufficient for dense crowds. In addition, group emotion has many subtle features, manifests as a combination of multiple emotions, and changes continuously over time; these characteristics cannot be expressed effectively by a discrete model.
Second, the A-V two-dimensional emotion model measures emotion mainly along two dimensions, Arousal and Valence, where Arousal reflects the intensity of the emotional state and Valence reflects its type. A two-dimensional description, however, is still somewhat simple compared with a three-dimensional emotion model.
Third, the A-V emotion model cannot distinguish certain emotions (e.g., anger and fear both belong to the high-Arousal emotions), whereas the PAD three-dimensional emotion model can distinguish them effectively (anger is a high-dominance emotion and fear a low-dominance emotion).
In this application, from the perspective of group emotion recognition and based on the psychological PAD three-dimensional emotion model, a video data set of group emotion is created through data collection and manual labeling, and the positional relationship of six typical emotions in PAD space is revealed. An emotion prediction model based on group behavior is constructed, mapping group motion features to three-dimensional coordinates in PAD space; it comprises motion feature extraction, middle-level semantic expression, group emotional feature extraction and fusion, and emotional state regression. An abnormal emotion classifier is constructed, and when either of the two abnormal emotions, anger or fear, is detected, the scene is judged to be in an abnormal state. The method provided by this application can accurately express the continuously changing state of group emotion and can also effectively identify global abnormal states.
Referring to fig. 1-7, the invention provides a group emotion recognition and abnormal emotion detection method based on a dimension emotion model, which comprises the following steps:
S1: Establish a PAD three-dimensional emotion model based on group emotion: the model comprises three dimensions, pleasure degree P, activation degree A and dominance degree D; the value of each dimension lies between −1 and +1, and a PAD emotion scale is set as a reference for the emotion dimensions;
S2: Establish a group behavior and group emotion data set: for video data of different scenes, acquire a standard video data set through a manual labeling strategy based on cognitive psychology principles;
S3: Compile statistics on the group emotion data set: according to the standard video data set, define the emotion type of each video, group the videos labeled with the same emotion, normalize their PAD values to [−1, 1], and determine the value of each emotion in PAD space by calculating the center point of the coordinates;
S4: Evaluate the group emotion data set: check whether the labeled data are consistent, and verify whether the labeled data follow a Gaussian distribution using the normplot function of the Matlab tool; if the labeled data do not follow a Gaussian distribution, the output curve is bent;
S5: Group emotion recognition and abnormal emotion detection: extract group motion features from the video and express the middle-level semantics of the group motion;
S6: Extract and regress the group emotional features: based on the principle of structural risk minimization, search for an optimal hyperplane and obtain a regression function with support vector regression (SVR) under the support of the training data set;
S7: Detect abnormal emotional states: take the PAD value of each labeled sample as input and train a support vector machine (SVM).
In step S1, P represents the positive/negative character of the individual emotional state, i.e., opposite states of emotion such as like versus dislike, satisfaction versus dissatisfaction, pleasure versus displeasure. A positive value of pleasure represents a positive emotion; otherwise it represents a negative emotion. In general, for a dense crowd, different types of group movement may express positive or negative emotions. For example, slow walking and stationary conversation represent positive emotions, while fighting and fleeing are typical negative emotions.
A represents the neurophysiologic activation level, alertness, and the degree of activation of body energy associated with emotional states, i.e., the intensity characteristics of emotion, including both low arousal states (e.g., silence) and high arousal states (e.g., surprise). For dense population, the intensity of the movement changes reflects the level of activation. For example, when the free forward movement of the crowd suddenly changes into escape, the general situation shows that the crowd is stimulated and influenced by some external factors, and the activation degree of the crowd is changed from a low awakening state to a high awakening state.
D represents the control state of the individual on the scene and other people, mainly refers to the subjective control degree of the individual on the emotional state, and is used for distinguishing whether the emotional state is generated by the individual subjectively or influenced by the objective environment. For dense populations, the variability and homogeneity of individual movements represent the magnitude of dominance. When the individual movement shows certain autonomy, randomness and disorder, for example, people who walk on squares and streets for leisure have individual behaviors which mainly follow the subjective consciousness of the individual, the dominance degree is higher. When the individual movement shows certain characteristics of people following and converging, for example, when people who escape are evacuated, all people run in a certain direction, the individual movement is limited to a group movement mode, and the group movement mode is macroscopically consistent, so that the dominance degree is low.
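As an illustration of how the PAD representation supports the later abnormal-emotion judgment, the following sketch (Python; the emotion center coordinates are illustrative placeholders, not the values measured in the experiments described below) compares a predicted PAD point with the center points of the six typical emotions and flags the point as abnormal when its nearest center is anger or fear.

```python
import numpy as np

# Illustrative center points of the six typical emotions in PAD space
# (placeholders; the real centers are derived from the labeled data set).
EMOTION_CENTERS = {
    "excited":  ( 0.6,  0.7,  0.4),
    "peaceful": ( 0.5, -0.4,  0.3),
    "boring":   (-0.2, -0.6,  0.0),
    "neutral":  ( 0.0,  0.0,  0.0),
    "angry":    (-0.6,  0.7,  0.5),   # high arousal, high dominance
    "fear":     (-0.6,  0.7, -0.5),   # high arousal, low dominance
}
ABNORMAL = {"angry", "fear"}

def nearest_emotion(pad_point):
    """Return the emotion whose PAD center is closest to the predicted point."""
    p = np.asarray(pad_point, dtype=float)
    return min(EMOTION_CENTERS,
               key=lambda e: np.linalg.norm(p - np.asarray(EMOTION_CENTERS[e])))

def is_abnormal(pad_point):
    """A scene is judged abnormal when the nearest emotion is anger or fear."""
    return nearest_emotion(pad_point) in ABNORMAL

print(nearest_emotion((-0.55, 0.65, -0.4)))  # -> fear
print(is_abnormal((-0.55, 0.65, -0.4)))      # -> True
```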
Example 1
In step S2, an emotion labeling system is designed for the manual labeling strategy; the system represents the P-dimension value by the facial expression of a character model, the A-dimension value by the degree of vibration of a heart icon, and the D-dimension value by the size of a small figure.
There are two main ways to construct an emotion data set: the deductive approach and the extractive approach. In the deductive approach, performers (preferably with professional acting training) simulate typical emotion types (joy, panic, sadness) through body movements. Such data have strong emotional contrast and expressiveness, but the acted form differs from real emotion, the demands on the performers are high, and the approach is not universal. In the extractive approach, video clips of real scenes are scored by manual labeling with respect to the emotional state and presentation of the group behavior and each evaluation index. The emotions obtained in this way are natural expressions of people and closer to real life, but the later labeling workload is large.
At present, there is no emotion database of group behavior in academia. Some scholars have done similar work and proposed calibrated data sets, but those data sets have not been published and their validity cannot be verified; moreover, their main purpose is to improve the detection of group behavior, not emotion analysis. Drawing on existing data sets of individual posture and individual behavior, an emotion labeling experiment is therefore carried out to establish a data set of group behavior and group emotion: a number of real videos covering different scenes are collected, and a standard video data set is obtained through a third-party manual labeling strategy.
The emotion data set constructed for the experiment is drawn from the UMN data set, the PETS2009 data set, the UCF data set, the SCU data set, the UCF Crowd and BEHAVE data sets, the Web Abnormal/Normal Crowds data set, the Violent-Flows data set and Rodriguez's data set, all videos of dense crowd scenes, 50 videos in total. Taking 15 frames as a unit, the videos are divided into segments, 200 video segments in total. 31 volunteers (17 male, 14 female, aged 19-35) were invited, and each video segment was labeled separately, covering two aspects of work:
(1) The volunteers scored each video segment on the three dimensions P, A and D. Each dimension is scored with five single-choice options {1, 2, 3, 4, 5}, from low to high.
(2) The volunteers determined the type of emotion each video presents, choosing one of seven single-choice options {excited, angry, fearful, peaceful, bored, neutral, none of the above}.
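A minimal sketch of the statistics of step S3 applied to labels collected as in (1) and (2), assuming the 1-5 single-choice scores are rescaled linearly to [-1, 1] (an assumed mapping; the text does not spell it out) and that the center point of each emotion is the mean of the coordinates of its segments:

```python
import numpy as np

def rescale_1_5_to_pad(score):
    # Linear mapping 1..5 -> -1..+1 (assumed; a score of 3 maps to 0).
    return (score - 3.0) / 2.0

def emotion_centers(segments):
    """segments: list of dicts {"emotion": str, "pad_scores": (P, A, D) on the 1-5 scale},
    e.g. averaged over volunteers. Returns the center point of each emotion in PAD space."""
    grouped = {}
    for seg in segments:
        pad = np.array([rescale_1_5_to_pad(v) for v in seg["pad_scores"]])
        grouped.setdefault(seg["emotion"], []).append(pad)
    return {emo: np.mean(pts, axis=0) for emo, pts in grouped.items()}

# Illustrative labeled segments (not real experimental data)
segments = [
    {"emotion": "fearful",  "pad_scores": (1.5, 4.6, 2.0)},
    {"emotion": "fearful",  "pad_scores": (1.8, 4.2, 1.6)},
    {"emotion": "peaceful", "pad_scores": (4.1, 2.0, 3.4)},
]
print(emotion_centers(segments))
```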
Example 2
The method for checking consistency in step S4 is as follows: calculate the coefficient of variation, evaluating three statistics of the PAD data, the sample mean μ, the sample standard deviation σ and the coefficient of variation CV, where the coefficient of variation is defined as:
CV = σ / μ
If the coefficient of variation is small, the consistency of the labeled data is verified to be high; otherwise, the consistency of the labeled data is low.
For the PAD data of the different video segments, the labeled values along each dimension are collected. If the coefficient of variation is large, the dispersion relative to the mean is large, indicating that the volunteers' scoring of the group has low consistency and certainty; otherwise, the consistency and certainty are high. In general, for a video with low consistency, if the coefficient of variation exceeds 20%, the data are considered possibly abnormal, indicating large divergence among the volunteers, and removal from the data set may be considered so as to ensure the credibility of the data.
Taking two video segments as examples, the former shows people fleeing and the latter shows people fighting violently. For the PAD statistics of these two segments, the results show that the coefficients of variation CV of the labeled data are concentrated in the [0, 20%] interval; the volunteers' scores can therefore be considered concentrated, with small divergence, so the PAD data of these two video segments are credible.
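A sketch of the consistency screening of step S4, under the assumption that the coefficient of variation is computed per video segment and per PAD dimension over the volunteers' scores, with the 20% threshold mentioned above; scipy.stats.probplot is used here only as a rough stand-in for Matlab's normplot when checking whether labels follow a Gaussian distribution.

```python
import numpy as np
from scipy import stats

def coefficient_of_variation(scores):
    scores = np.asarray(scores, dtype=float)
    return scores.std(ddof=1) / scores.mean()   # CV = sigma / mu

def screen_segment(volunteer_scores, threshold=0.20):
    """volunteer_scores: array of shape (n_volunteers, 3) with P, A, D scores
    for one video segment. Returns (keep, per-dimension CV)."""
    cv = np.array([coefficient_of_variation(volunteer_scores[:, d]) for d in range(3)])
    return bool(np.all(cv <= threshold)), cv

def normality_indicator(scores):
    # Rough analogue of Matlab's normplot: a nearly straight probability plot
    # (correlation r close to 1) suggests an approximately Gaussian distribution.
    (_osm, _osr), (_slope, _intercept, r) = stats.probplot(scores, dist="norm")
    return r

rng = np.random.default_rng(0)
scores = rng.normal(loc=4.0, scale=0.4, size=(31, 3))  # 31 volunteers, P/A/D (illustrative)
print(screen_segment(scores))
print(normality_indicator(scores[:, 0]))
```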
Example 3
The extraction of group motion features in step S5 comprises extraction of foreground regions, extraction of optical flow features, extraction of trajectory features, and graphical expression of the motion features. The foreground region is extracted with an improved ViBe+ algorithm, and the detected foreground region of the t-th frame is denoted R_t. The optical flow features are expressed visually with the Gunnar Farneback dense optical flow field; for the t-th frame image, the optical flow offsets of pixel point (x, y) in the horizontal and vertical directions are u and v, respectively. The trajectory features are extracted with the iDT algorithm, which densely samples video pixel points and determines the position of each tracking point in the next frame through optical flow, forming a tracking trajectory denoted T(p_1, p_2, …, p_L), where L ≤ 15. The graphical expression of the motion features adopts three graphical forms: a global motion intensity map, a global motion direction map, and a global motion trajectory map.
Each trajectory in the global motion trajectory map is drawn as a solid line, and each trajectory carries three attribute features <T(p_1, p_2, …, p_L), L, g_i>, where T(p_1, p_2, …, p_L) denotes the tracking points p_i that constitute the trajectory, L denotes the length of the trajectory, and g_i ∈ [0, 255] denotes the gray value of the i-th segment of the trajectory; g_i is expressed as follows:
wherein i ∈ [1, L−1].
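A condensed sketch of the low-level motion feature extraction of step S5. OpenCV's Farneback dense optical flow matches the method named above, while the MOG2 background subtractor is only a stand-in for the improved ViBe+ algorithm, used so that the example stays self-contained; the video path is a placeholder.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd.mp4")            # placeholder path
bg_sub = cv2.createBackgroundSubtractorMOG2()  # stand-in for the improved ViBe+ algorithm
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Foreground region R_t of the current frame
    fg_mask = bg_sub.apply(frame)

    # Dense Farneback optical flow: per-pixel horizontal/vertical offsets (u, v)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    magnitude = np.sqrt(u ** 2 + v ** 2)

    # A simple global motion intensity map restricted to the foreground region
    intensity_map = magnitude * (fg_mask > 0)

    prev_gray = gray

cap.release()
```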
The middle-level semantics of the group motion in step S5 are analyzed in depth using a gray-level co-occurrence matrix; the adopted statistics include variance, contrast, second moment, entropy, correlation and reciprocal difference moment;
the variance is used for reflecting the gray level change degree of the image, when the variance is larger, the gray level change of the image is larger, and the calculation formula of the variance is as follows:
the contrast is used for measuring the value distribution of the matrix and the local variation in the image and reflecting the definition of the image and the depth of the texture, and a calculation formula of the contrast is as follows:
the second moment is used for measuring the gray change stability of the image texture and reflecting the gray distribution uniformity and texture thickness of the image, and if the value of the second moment is larger, the texture mode with uniform and regular change is indicated, and the calculation formula of the second moment is as follows:
the entropy is used for measuring the randomness of the information content of the image and reflecting the complexity of the gray level distribution of the image, and the calculation formula of the entropy is as follows:
the correlation is used for measuring the similarity of the elements of the space gray level co-occurrence matrix in the row or column direction and reflecting the consistency of image textures, and a calculation formula of the correlation is as follows:
the reciprocal difference moment is used for reflecting the homogeneity of the image texture and measuring the local change of the image texture, if the value is large, the change is absent among different areas of the image texture, the local uniformity is realized, and the calculation formula of the reciprocal difference moment is as follows:
example 4
The step S6 regression function is as follows:
where ω is the weight vector, C is a balance coefficient, ξ_i is a slack variable, φ(·) is the nonlinear transformation that maps the data into a high-dimensional space, b is the bias term, and ε is the sensitivity;
Introducing Lagrange multipliers, equation (10) is transformed into:
the regression function finally found was:
where k(x, x_i) is a kernel function;
A radial basis function (RBF) kernel is adopted, with the expression:
k(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))    (13)
The regression model obtained after training realizes dimensional emotion prediction: a continuous value in PAD space is predicted for each video segment, and as the group emotion changes over time it is expressed as a continuous three-dimensional trajectory, presenting a gradual emotional process.
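A sketch of the regression of step S6 using scikit-learn's epsilon-SVR with an RBF kernel, one regressor per PAD dimension; the feature vectors and hyperparameters are illustrative placeholders standing in for the fused group-motion and texture features described above.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.random((200, 24))          # per-segment motion/texture features (illustrative)
y_train = rng.uniform(-1, 1, (200, 3))   # labeled P, A, D values in [-1, 1]

# epsilon-SVR with RBF kernel, one output per PAD dimension
model = MultiOutputRegressor(
    make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
)
model.fit(X_train, y_train)

X_video = rng.random((30, 24))           # consecutive segments of one video
pad_trajectory = model.predict(X_video)  # shape (30, 3): a continuous trajectory in PAD space
print(pad_trajectory[:3])
```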
Example 5
The step S7 of detecting the abnormal emotional state obtains a quadratic equation of the SVM hyperplane, where the expression is:
subject to w^T Φ(x_i) ≥ ρ − ξ_i and ξ_i ≥ 0, where x_i, i = 1, 2, …, N, denotes the training set data; w^T Φ(x) − ρ = 0 is the maximum-margin decision hyperplane; ξ_i is a slack variable that penalizes outliers; ν ∈ (0, 1] is a percentage estimate; Φ(·) is a nonlinear function that maps the training data into a high-dimensional feature space. Further, the kernel function is defined as k(x_i, x_j) = <Φ(x_i), Φ(x_j)>, a dot-product operation performed in the feature space; a Gaussian kernel function is adopted, and the decision function is defined as:
for the detection result of the population abnormal emotion, the coordinates of six emotion types in the PAD space are determined. The gradual change of the curve from light to dark shows the sequence of the frame sequences. Since the crowd emotion state in the video changes continuously, the crowd emotion state changes continuously. The variation process of the group emotion in the video along with time is represented as a continuous emotion track in the figure. It can be seen that the emotion initially fluctuates around the boring coordinate point, indicating that the population is in a normal state at this time. Then the emotion track suddenly moves to the vicinity of the coordinate point of fear, which shows that the group emotion is converted into abnormity and the change is very sudden. Therefore, from a qualitative point of view, the description of the population emotion by the experiment is consistent with the fact.
The above disclosure is only for a few specific embodiments of the present invention, however, the present invention is not limited to the above embodiments, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.
Claims (8)
1. A method for group emotion recognition and abnormal emotion detection based on a dimension emotion model, characterized by comprising the following steps:
S1: Establish a PAD three-dimensional emotion model based on group emotion: the model comprises three dimensions, pleasure degree P, activation degree A and dominance degree D; the value of each dimension lies between −1 and +1, and a PAD emotion scale is set as a reference for the emotion dimensions;
S2: Establish a group behavior and group emotion data set: for video data of different scenes, acquire a standard video data set through a manual labeling strategy based on cognitive psychology principles;
S3: Compile statistics on the group emotion data set: according to the standard video data set, define the emotion type of each video, group the videos labeled with the same emotion, normalize their PAD values to [−1, 1], and determine the value of each emotion in PAD space by calculating the center point of the coordinates;
S4: Evaluate the group emotion data set: check whether the labeled data are consistent, and verify whether the labeled data follow a Gaussian distribution using a normplot function; if the labeled data do not follow a Gaussian distribution, the output curve is bent;
S5: Group emotion recognition and abnormal emotion detection: extract group motion features from the video and express the middle-level semantics of the group motion;
S6: Extract and regress the group emotional features: based on the principle of structural risk minimization, search for an optimal hyperplane and obtain a regression function with support vector regression (SVR) under the support of the training data set;
S7: Detect abnormal emotional states: take the PAD value of each labeled sample as input and train a support vector machine (SVM).
2. The method for group emotion recognition and abnormal emotion detection based on the dimension emotion model according to claim 1, characterized in that in step S2 an emotion labeling system is designed for the manual labeling strategy; the system represents the P-dimension value by the facial expression of a character model, the A-dimension value by the degree of vibration of a heart icon, and the D-dimension value by the size of a small figure.
3. The method for group emotion recognition and abnormal emotion detection based on the dimension emotion model according to claim 1, characterized in that the method for checking consistency in step S4 is as follows: calculate the coefficient of variation, evaluating three statistics of the PAD data, the sample mean μ, the sample standard deviation σ and the coefficient of variation CV, where the coefficient of variation is defined as:
CV = σ / μ
If the coefficient of variation is small, the consistency of the labeled data is verified to be high; otherwise, the consistency of the labeled data is low.
4. The method for group emotion recognition and abnormal emotion detection based on the dimension emotion model according to claim 1, characterized in that the extraction of group motion features in step S5 comprises foreground region extraction, optical flow feature extraction, trajectory feature extraction and graphical expression of the motion features; the foreground region is extracted with an improved ViBe+ algorithm, and the detected foreground region of the t-th frame is denoted R_t; the optical flow features are expressed visually with the Gunnar Farneback dense optical flow field, and for the t-th frame image the optical flow offsets of pixel point (x, y) in the horizontal and vertical directions are u and v, respectively; the trajectory features are extracted with the iDT algorithm, which densely samples video pixel points and determines the position of each tracking point in the next frame through optical flow, forming a tracking trajectory denoted T(p_1, p_2, …, p_L), where L ≤ 15; the graphical expression of the motion features adopts three graphical forms: a global motion intensity map, a global motion direction map and a global motion trajectory map.
5. The method for group emotion recognition and abnormal emotion detection based on the dimension emotion model according to claim 4, characterized in that each trajectory in the global motion trajectory map is drawn as a solid line, and each trajectory carries three attribute features <T(p_1, p_2, …, p_L), L, g_i>, where T(p_1, p_2, …, p_L) denotes the tracking points p_i that constitute the trajectory, L denotes the length of the trajectory, and g_i ∈ [0, 255] denotes the gray value of the i-th segment of the trajectory; g_i is expressed as follows:
wherein i ∈ [1, L−1].
6. The method for group emotion recognition and abnormal emotion detection based on the dimension emotion model according to claim 5, characterized in that the middle-level semantics of the group motion in step S5 are further analyzed using a gray-level co-occurrence matrix, the adopted statistics including variance, contrast, second moment, entropy, correlation and reciprocal difference moment;
the variance is used for reflecting the gray level change degree of the image, when the variance is larger, the gray level change of the image is larger, and the calculation formula of the variance is as follows:
the contrast is used for measuring the value distribution of the matrix and the local variation in the image and reflecting the definition of the image and the depth of the texture, and a calculation formula of the contrast is as follows:
the second moment is used for measuring the gray change stability of the image texture and reflecting the gray distribution uniformity and texture thickness of the image, and if the value of the second moment is larger, the texture mode with uniform and regular change is indicated, and the calculation formula of the second moment is as follows:
the entropy is used for measuring the randomness of the information content of the image and reflecting the complexity of the gray level distribution of the image, and the calculation formula of the entropy is as follows:
the correlation is used for measuring the similarity of the elements of the space gray level co-occurrence matrix in the row or column direction and reflecting the consistency of image textures, and a calculation formula of the correlation is as follows:
the reciprocal difference moment is used for reflecting the homogeneity of the image texture and measuring the local change of the image texture, if the value is large, the change is absent among different areas of the image texture, the local uniformity is realized, and the calculation formula of the reciprocal difference moment is as follows:
7. the method for group emotion recognition and abnormal emotion detection based on the dimension emotion model as claimed in claim 1, wherein the step S6 regression function is as follows:
where ω is the weight vector, C is a balance coefficient, ξ_i is a slack variable, φ(·) is the nonlinear transformation that maps the data into a high-dimensional space, b is the bias term, and ε is the sensitivity;
Introducing Lagrange multipliers, equation (10) is transformed into:
the regression function finally found was:
where k(x, x_i) is a kernel function;
A radial basis function (RBF) kernel is adopted, with the expression:
k(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))    (13)
The regression model obtained after training realizes dimensional emotion prediction: a continuous value in PAD space is predicted for each video segment, and as the group emotion changes over time it is expressed as a continuous three-dimensional trajectory, presenting a gradual emotional process.
8. The method for group emotion recognition and abnormal emotion detection based on dimension emotion model as claimed in claim 1, wherein the detection of abnormal emotion state in step S7 obtains quadratic equation of SVM hyperplane, and its expression is:
subject to w^T Φ(x_i) ≥ ρ − ξ_i and ξ_i ≥ 0, where x_i, i = 1, 2, …, N, denotes the training set data; w^T Φ(x) − ρ = 0 is the maximum-margin decision hyperplane; ξ_i is a slack variable that penalizes outliers; ν ∈ (0, 1] is a percentage estimate; Φ(·) is a nonlinear function that maps the training data into a high-dimensional feature space. Further, the kernel function is defined as k(x_i, x_j) = <Φ(x_i), Φ(x_j)>, a dot-product operation performed in the feature space; a Gaussian kernel function is adopted, and the decision function is defined as:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011601643.3A CN112699785B (en) | 2020-12-29 | 2020-12-29 | Group emotion recognition and abnormal emotion detection method based on dimension emotion model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011601643.3A CN112699785B (en) | 2020-12-29 | 2020-12-29 | Group emotion recognition and abnormal emotion detection method based on dimension emotion model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112699785A true CN112699785A (en) | 2021-04-23 |
CN112699785B CN112699785B (en) | 2022-06-07 |
Family
ID=75512114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011601643.3A Active CN112699785B (en) | 2020-12-29 | 2020-12-29 | Group emotion recognition and abnormal emotion detection method based on dimension emotion model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112699785B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113743271A (en) * | 2021-08-27 | 2021-12-03 | 中国科学院软件研究所 | Video content effectiveness visual analysis method and system based on multi-modal emotion |
CN113822184A (en) * | 2021-09-08 | 2021-12-21 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Expression recognition-based non-feeling emotion abnormity detection method |
CN114677725A (en) * | 2022-03-02 | 2022-06-28 | 郑州大学 | Method and device for predicting and evaluating passive emotion situation of group |
CN115905837A (en) * | 2022-11-17 | 2023-04-04 | 杭州电子科技大学 | Semi-supervised self-adaptive labeling regression electroencephalogram emotion recognition method for automatic abnormality detection |
CN117313723A (en) * | 2023-11-28 | 2023-12-29 | 广州云趣信息科技有限公司 | Semantic analysis method, system and storage medium based on big data |
US20240040165A1 (en) * | 2022-07-29 | 2024-02-01 | Roku, Inc. | Emotion Evaluation of Contents |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104732203A (en) * | 2015-03-05 | 2015-06-24 | 中国科学院软件研究所 | Emotion recognizing and tracking method based on video information |
US20160287166A1 (en) * | 2015-04-03 | 2016-10-06 | Bao Tran | Personal monitoring system |
CN107169426A (en) * | 2017-04-27 | 2017-09-15 | 广东工业大学 | A kind of detection of crowd's abnormal feeling and localization method based on deep neural network |
CN107220591A (en) * | 2017-04-28 | 2017-09-29 | 哈尔滨工业大学深圳研究生院 | Multi-modal intelligent mood sensing system |
US10061977B1 (en) * | 2015-04-20 | 2018-08-28 | Snap Inc. | Determining a mood for a group |
CN110826466A (en) * | 2019-10-31 | 2020-02-21 | 南京励智心理大数据产业研究院有限公司 | Emotion identification method, device and storage medium based on LSTM audio-video fusion |
CN111353366A (en) * | 2019-08-19 | 2020-06-30 | 深圳市鸿合创新信息技术有限责任公司 | Emotion detection method and device and electronic equipment |
CN111368649A (en) * | 2020-02-17 | 2020-07-03 | 杭州电子科技大学 | Emotion perception method operating in raspberry pie |
CN111914594A (en) * | 2019-05-08 | 2020-11-10 | 四川大学 | Group emotion recognition method based on motion characteristics |
-
2020
- 2020-12-29 CN CN202011601643.3A patent/CN112699785B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104732203A (en) * | 2015-03-05 | 2015-06-24 | 中国科学院软件研究所 | Emotion recognizing and tracking method based on video information |
US20160287166A1 (en) * | 2015-04-03 | 2016-10-06 | Bao Tran | Personal monitoring system |
US10061977B1 (en) * | 2015-04-20 | 2018-08-28 | Snap Inc. | Determining a mood for a group |
CN107169426A (en) * | 2017-04-27 | 2017-09-15 | 广东工业大学 | A kind of detection of crowd's abnormal feeling and localization method based on deep neural network |
CN107220591A (en) * | 2017-04-28 | 2017-09-29 | 哈尔滨工业大学深圳研究生院 | Multi-modal intelligent mood sensing system |
CN111914594A (en) * | 2019-05-08 | 2020-11-10 | 四川大学 | Group emotion recognition method based on motion characteristics |
CN111353366A (en) * | 2019-08-19 | 2020-06-30 | 深圳市鸿合创新信息技术有限责任公司 | Emotion detection method and device and electronic equipment |
CN110826466A (en) * | 2019-10-31 | 2020-02-21 | 南京励智心理大数据产业研究院有限公司 | Emotion identification method, device and storage medium based on LSTM audio-video fusion |
CN111368649A (en) * | 2020-02-17 | 2020-07-03 | 杭州电子科技大学 | Emotion perception method operating in raspberry pie |
Non-Patent Citations (5)
Title |
---|
REDA ELBAROUGY et al.: "Continuous Audiovisual Emotion Recognition Using Feature Selection and LSTM", 《JOURNAL OF SIGNAL PROCESSING》 *
SHIZHE CHEN et al.: "Multimodal Multi-task Learning for Dimensional and Continuous Emotion Recognition", 《AVEC'17: PROCEEDINGS OF THE 7TH ANNUAL WORKSHOP ON AUDIO/VISUAL EMOTION CHALLENGE》 *
张严浩: "Group Behavior Analysis Based on Structured Cognitive Computing", 《CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE (INFORMATION SCIENCE AND TECHNOLOGY)》 *
张婷: "Research on Emotional Speech Based on the PAD Three-Dimensional Emotion Model", 《CHINA MASTER'S THESES FULL-TEXT DATABASE (INFORMATION SCIENCE AND TECHNOLOGY)》 *
饶元 et al.: "Research Progress of Affective Computing Technology Based on Semantic Analysis", 《JOURNAL OF SOFTWARE》 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113743271A (en) * | 2021-08-27 | 2021-12-03 | 中国科学院软件研究所 | Video content effectiveness visual analysis method and system based on multi-modal emotion |
CN113743271B (en) * | 2021-08-27 | 2023-08-01 | 中国科学院软件研究所 | Video content effectiveness visual analysis method and system based on multi-modal emotion |
CN113822184A (en) * | 2021-09-08 | 2021-12-21 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Expression recognition-based non-feeling emotion abnormity detection method |
CN114677725A (en) * | 2022-03-02 | 2022-06-28 | 郑州大学 | Method and device for predicting and evaluating passive emotion situation of group |
CN114677725B (en) * | 2022-03-02 | 2024-09-24 | 郑州大学 | Method and device for predicting and evaluating passive emotion situation of population |
US20240040165A1 (en) * | 2022-07-29 | 2024-02-01 | Roku, Inc. | Emotion Evaluation of Contents |
US11930226B2 (en) * | 2022-07-29 | 2024-03-12 | Roku, Inc. | Emotion evaluation of contents |
CN115905837A (en) * | 2022-11-17 | 2023-04-04 | 杭州电子科技大学 | Semi-supervised self-adaptive labeling regression electroencephalogram emotion recognition method for automatic abnormality detection |
CN117313723A (en) * | 2023-11-28 | 2023-12-29 | 广州云趣信息科技有限公司 | Semantic analysis method, system and storage medium based on big data |
CN117313723B (en) * | 2023-11-28 | 2024-02-20 | 广州云趣信息科技有限公司 | Semantic analysis method, system and storage medium based on big data |
Also Published As
Publication number | Publication date |
---|---|
CN112699785B (en) | 2022-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112699785B (en) | Group emotion recognition and abnormal emotion detection method based on dimension emotion model | |
Lotfi et al. | Supporting independent living for older adults; employing a visual based fall detection through analysing the motion and shape of the human body | |
Chen et al. | Analyze spontaneous gestures for emotional stress state recognition: A micro-gesture dataset and analysis with deep learning | |
Arunnehru et al. | Automatic human emotion recognition in surveillance video | |
Zhang et al. | Exploring coherent motion patterns via structured trajectory learning for crowd mood modeling | |
Zhang et al. | Crowd emotion evaluation based on fuzzy inference of arousal and valence | |
Wang et al. | Unlocking the emotional world of visual media: An overview of the science, research, and impact of understanding emotion | |
Zhang et al. | Physiognomy: Personality traits prediction by learning | |
Kulkarni et al. | Facial expression (mood) recognition from facial images using committee neural networks | |
Benalcázar et al. | Real-time hand gesture recognition based on artificial feed-forward neural networks and EMG | |
Butt et al. | Fall detection using LSTM and transfer learning | |
CN112801009B (en) | Facial emotion recognition method, device, medium and equipment based on double-flow network | |
CN108717548B (en) | Behavior recognition model updating method and system for dynamic increase of sensors | |
Gomes et al. | Establishing the relationship between personality traits and stress in an intelligent environment | |
Doshi et al. | From deep learning to episodic memories: Creating categories of visual experiences | |
Naidu et al. | Stress recognition using facial landmarks and CNN (Alexnet) | |
Alsaedi | New Approach of Estimating Sarcasm based on the percentage of happiness of facial Expression using Fuzzy Inference System | |
CN110084152B (en) | Disguised face detection method based on micro-expression recognition | |
TWI646438B (en) | Emotion detection system and method | |
Iqbal et al. | Facial emotion recognition using geometrical features based deep learning techniques | |
Yashaswini et al. | Stress detection using deep learning and IoT | |
Adibuzzaman et al. | In situ affect detection in mobile devices: a multimodal approach for advertisement using social network | |
Hwooi et al. | Monitoring application-driven continuous affect recognition from video frames | |
Parmar et al. | Human activity recognition system | |
Adibuzzaman et al. | Towards in situ affect detection in mobile devices: A multimodal approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |