CN111680550B - Emotion information identification method and device, storage medium and computer equipment - Google Patents
Emotion information identification method and device, storage medium and computer equipment
- Publication number
- CN111680550B CN202010349534.0A
- Authority
- CN
- China
- Prior art keywords
- emotion
- gesture
- information
- human body
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention discloses an emotion information identification method and device, a storage medium and computer equipment, and relates to the technical field of artificial intelligence. An acquired human body posture and facial expression are processed into a posture matrix and emotion vector information respectively, and the posture matrix and the emotion vector information are then processed together by a pre-trained emotion intensity identification model to obtain emotion intensity data, from which the corresponding emotion type is identified, thereby improving the accuracy and efficiency of emotion information identification. The method comprises the following steps: receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information; converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; processing the posture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and searching for and feeding back the corresponding emotion type according to the emotion intensity data. In addition, the invention relates to blockchain technology, and the emotion intensity data can be stored in a blockchain.
Description
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for identifying emotion information, a storage medium, and a computer device.
Background
With the development of big data, enabling robots to possess social and service capabilities, and to read a person's emotion intensity and fluctuations in real time during human-machine interaction, has increasingly become a common wish and demand. In the actual business handling process, if a robot can detect a person's emotion fluctuations in time and make adaptive adjustments according to the person's response, the person's dissatisfaction can be relieved, and the robot's performance can also gain higher acceptance from the user.
At present, the traditional emotion information recognition technology only infers a person's emotion from facial expressions. However, this approach ignores the role that the body language a person naturally exhibits in social settings plays in emotion intensity, so the accuracy of emotion information recognition is low and the recognition efficiency is also low.
Disclosure of Invention
In view of this, the invention provides an emotion information recognition method, device, storage medium and computer device. The main aim is to process the acquired human body posture and facial expression into a posture matrix and emotion vector information respectively, and to process the posture matrix and the emotion vector information together through a pre-trained emotion intensity recognition model to obtain emotion intensity data, from which the corresponding emotion type is recognized, thereby improving the accuracy and efficiency of emotion information recognition through the dual dimensions of human body posture and facial expression. In addition, the invention stores data by using blockchain technology, so that the security of the emotion information can be improved.
According to one aspect of the present invention, there is provided an emotion information identification method including:
receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information;
converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm;
processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data;
and searching and feeding back the corresponding emotion type according to the emotion intensity data.
Further, the processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data includes:
and processing the gesture matrix and the acquired emotion vector information by using a pre-trained emotion intensity model to obtain emotion intensity data.
Further, the processing the gesture matrix and the acquired emotion vector information by using the pre-trained emotion intensity model to obtain emotion intensity data includes:
And processing the input gesture matrix and emotion vector information simultaneously by using a sigmoid function, and outputting obtained emotion intensity data, wherein the emotion intensity data is stored in a blockchain.
Further, the converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm includes:
Acquiring Euler angle parameters of each characteristic point;
And determining an attitude matrix based on each characteristic point under a human body static model coordinate system according to the Euler angle parameters.
Further, before the processing is performed on the gesture matrix and the acquired emotion vector information by using the pre-trained emotion intensity model to obtain emotion intensity data, the method further includes:
And carrying out recognition processing on the obtained facial expression information by using a preset facial recognition algorithm to obtain corresponding emotion vector information.
Further, before the human body posture information is converted into the posture matrix containing the posture feature points by using a preset posture conversion algorithm, the method further comprises:
according to the human body posture information, establishing a homogeneous transformation matrix of human body joint points based on a human body static model coordinate system;
and determining the coordinates of each joint point in a matrix multiplication mode, and determining the joint point as a characteristic point of the human body posture.
Further, before the processing is performed on the gesture matrix and the acquired emotion vector information by using the pre-trained emotion intensity model to obtain emotion intensity data, the method further includes:
Training an emotion intensity model according to the RNN-LSTM model, the sample gesture data and the preset emotion gesture label.
According to a second aspect of the present invention, there is provided an emotion information recognition device including:
The receiving unit is used for receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information;
The conversion unit is used for converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm;
the processing unit is used for processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data;
and the feedback unit is used for searching and feeding back the corresponding emotion type according to the emotion intensity data.
Further, the processing unit includes:
and the processing module is used for processing the gesture matrix and the acquired emotion vector information by utilizing a pre-trained emotion intensity model to obtain emotion intensity.
Further, the processing module is specifically configured to process the input gesture matrix and emotion vector information simultaneously by using a sigmoid function, and output obtained emotion intensity data, where the emotion intensity data is stored in a blockchain.
Further, the conversion unit includes:
the acquisition module is used for acquiring Euler angle parameters of each characteristic point;
and the determining module is used for determining an attitude matrix based on each characteristic point under the human body static model coordinate system according to the Euler angle parameters.
Further, the apparatus further comprises:
The identification unit is used for carrying out identification processing on the obtained facial expression information by utilizing a preset facial identification algorithm to obtain corresponding emotion vector information.
Further, the apparatus further comprises:
the establishing unit is used for establishing a homogeneous transformation matrix of human joint points based on a human static model coordinate system according to the human posture information;
and the determining unit is used for determining the coordinates of each joint point in a matrix multiplication mode and determining the joint point as a characteristic point of the human body gesture.
Further, the apparatus further comprises:
the training unit is used for training the emotion intensity model according to the RNN-LSTM model, the sample gesture data and the preset emotion gesture label.
According to a third aspect of the present invention, there is provided a storage medium having stored therein at least one executable instruction for causing a processor to perform the steps of: receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information; converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and searching and feeding back the corresponding emotion type according to the emotion intensity data.
According to a fourth aspect of the present invention there is provided a computer device comprising a processor, a memory, a communications interface and a communications bus, said processor, said memory and said communications interface completing communications with each other via said communications bus, said memory for storing at least one executable instruction, said executable instruction causing said processor to perform the steps of: receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information; converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and searching and feeding back the corresponding emotion type according to the emotion intensity data.
Compared with the prior art, in which a person's emotion is inferred only from facial expressions, the emotion information identification method, device, storage medium and computer equipment provided by the invention receive an emotion information identification request, wherein the emotion information identification request carries human body posture information; convert the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; process the posture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and search for and feed back the corresponding emotion type according to the emotion intensity data. Therefore, the accuracy and efficiency of emotion information identification can be improved through the dual dimensions of human body posture and facial expression. In addition, the invention stores data by using blockchain technology, so that the security of the emotion information can be improved.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the present invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a flowchart of an emotion information identification method provided by an embodiment of the present invention;
Fig. 2 shows a schematic diagram of human body feature points based on euler angles according to an embodiment of the present invention;
fig. 3 shows a schematic structural diagram of an emotion information identification device according to an embodiment of the present invention;
fig. 4 shows a schematic physical structure of a computer device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As described in the background art, the conventional emotion information recognition technology currently only infers a person's emotion from facial expressions. However, this approach ignores the role that the body language a person naturally exhibits in social settings plays in emotion intensity, so the accuracy of emotion information recognition is not high and the efficiency of emotion information recognition is relatively low.
In order to solve the above problem, an embodiment of the present invention provides a method for identifying emotion information, as shown in fig. 1, the method includes:
101. Receiving an emotion information identification request, wherein the emotion information identification request carries the human body posture information.
The emotion information identification request may specifically be sent by a server. In an actual application scenario, an image containing human body posture information can be acquired through a camera arranged in the robot, and the human body posture information can be used for emotion analysis, so that the robot can take different measures for the user according to the obtained emotion type, such as handling the business in advance or switching to another service window, thereby improving business handling efficiency.
102. Converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm.
The posture conversion algorithm may specifically be a method for expressing the rotational degree of freedom of each joint point of the human body through Euler angles. The obtained human body posture information can be processed by the preset algorithm to obtain a matrix containing 13 feature points. Because human body joints undergo angular rotational motion, the rotational degree of freedom of each feature point is represented using Euler angles, so the coordinate position and rotational degree of freedom of each feature point of the human body in each frame of image can be obtained from the human body posture information. Through step 102, the human body posture can be abstracted into a posture matrix represented by 13 feature points, and the matrix can then be analyzed so that the emotion types corresponding to different posture matrices can be obtained.
It should be noted that, the number of feature points corresponding to the embodiment of the present invention may be set according to the requirement of the service type, for example, if the accuracy requirement is higher, the finger joints may be also abstracted as feature points.
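As an illustration of what such a posture matrix looks like, the following minimal Python sketch stacks per-joint Euler angles into a 13x3 matrix. The specific list of 13 joints is an assumption made only for illustration, since the text does not enumerate them.

```python
import numpy as np

# Hypothetical selection of 13 body joints; the patent does not enumerate
# them, so this list is illustrative only.
FEATURE_POINTS = [
    "head", "neck", "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow", "left_wrist", "right_wrist",
    "left_hip", "right_hip", "left_knee", "right_knee", "torso",
]

def build_pose_matrix(euler_angles: dict) -> np.ndarray:
    """Stack per-joint (r, p, y) Euler angles into a 13x3 posture matrix."""
    return np.array([euler_angles[name] for name in FEATURE_POINTS])

# Example: all joints at a neutral orientation.
pose = build_pose_matrix({name: (0.0, 0.0, 0.0) for name in FEATURE_POINTS})
print(pose.shape)  # (13, 3)
```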
103. Processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data.
The emotion intensity algorithm may specifically include processing the gesture matrix through a pre-trained emotion intensity model. For the embodiment of the present invention, the emotion intensity model may specifically be a double-layer LSTM-RNN structure. Because the prior art identifies emotion information only through the single dimension of face recognition, recognition errors occur easily and recognition accuracy is low. The embodiment of the invention improves on this basis: by adopting a double-layer LSTM-RNN structure, the two dimensions of face recognition and human body posture can be combined to identify the emotion type, which greatly improves the accuracy of emotion information identification. Specifically, the gesture matrix is processed according to the preset emotion intensity algorithm to obtain corresponding emotion intensity data, and the emotion intensity data can then be used to look up the corresponding emotion type so that a corresponding handling measure is adopted.
104. Searching for and feeding back the corresponding emotion type according to the emotion intensity data.
After the emotion intensity data is obtained, the corresponding emotion type is searched for locally, and the emotion type is used to respond to the emotion information identification request. For example, if the emotion intensity is 1 and the emotion type found locally for intensity 1 is anger, then after the emotion type anger is fed back, the robot can be controlled to take the measure of handling the user's business in advance. For the embodiment of the invention, the correspondence between emotion intensity data and emotion types can be established in advance, and the emotion intensity data, the emotion types and the correspondence between them are stored locally, so that different soothing measures can be taken. In an actual application scenario, for example, when a bank front office handles business, users often have to wait in queues, during which situations such as impatience while waiting, long queues, or needing help during self-service business handling may occur. By estimating emotion intensity from the human body posture, the user can be calmed according to the user's expression and body posture, and the user can be assisted with business handling.
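As a concrete illustration of the lookup step, here is a minimal Python sketch of a locally stored intensity-to-type table; the particular intensity values and labels are assumptions for illustration, not values defined by the invention.

```python
# Illustrative local lookup table; the real mapping between intensity data
# and emotion types is pre-established and stored locally, per the description.
EMOTION_TYPES = {
    1: "anger",
    2: "surprise",
    3: "calm",
}

def lookup_emotion_type(intensity: int) -> str:
    """Return the emotion type registered for the given intensity value."""
    return EMOTION_TYPES.get(intensity, "unknown")

print(lookup_emotion_type(1))  # "anger" -> e.g. trigger priority handling
```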
Further, in order to better illustrate the process of the emotion information identification method, as a refinement and extension of the above embodiment, several alternative embodiments are provided in the embodiment of the present invention, but not limited thereto, and specifically shown as follows:
In an alternative embodiment of the present invention, step 103 may specifically include: processing the gesture matrix and the acquired emotion vector information by using a pre-trained emotion intensity model to obtain emotion intensity data.
The pre-trained emotion intensity model can be an LSTM-RNN structure. Because the LSTM-RNN structure can capture very fine-grained differences over a sequence, it is well suited to processing human body posture sequences of high complexity, length and internal correlation. For the embodiment of the invention, a double-layer LSTM-RNN structure can be adopted, whose input can be a variable-length action series {X1, X2, X3, ..., Xn-1, Xn}; the variable-length action series may specifically be a sequence of human body posture images, and Xn can be any frame in that sequence. In addition, for the embodiment of the present invention, an emotion vector corresponding to the facial expression may be input as another parameter. The emotion vector can be used to represent the actual person's emotion type; specifically, each facial expression of the human body in the posture images may correspond to a different emotion type, and the emotion type may be represented by a numerical value, for example 1 for anger, 2 for surprise, and so on. The human body posture images and the emotion vector are input into the pre-trained emotion intensity model simultaneously for processing, so as to obtain the emotion intensity data.

For example, for the human body posture of slightly opening both arms, if the recognized facial expression is normal, the emotional intensity of calm can be obtained; but if the recognized facial expression is wide-eyed, the emotional intensity of surprise can be obtained. That is, by combining the emotion type with the human body posture, recognition accuracy can be improved and correct feedback can be given. For example, if a person is dissatisfied, he may frown, and if irritated by improper handling, he may express strong dissatisfaction by shrugging. However, if the emotion fluctuation can be perceived in time and adjustments made adaptively according to the person's response, the person's dissatisfaction can be relieved and higher acceptance from the user can be gained. For another example, the action of slightly opening both arms is just a normal posture when a calm person is expressing an idea; but if the person's expression is surprised, the degree of surprise conveyed is much higher than when the arms hang naturally. Therefore, in order to enable the robot to analyze a person's body posture and actions more fully, the emotion type may be included in the emotion intensity estimation process.
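For illustration, a minimal PyTorch sketch of one way such a double-layer LSTM-RNN could consume the pose sequence together with the emotion vector is given below. The layer sizes, the fusion by concatenation and the number of output classes are assumptions rather than the patent's exact architecture.

```python
import torch
import torch.nn as nn

class EmotionIntensityModel(nn.Module):
    """Two-layer LSTM over the pose sequence, fused with a facial emotion vector."""

    def __init__(self, pose_dim=13 * 3, emotion_dim=1, hidden=128, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden + emotion_dim, n_classes)

    def forward(self, poses, emotion_vec):
        # poses: (batch, n_frames, 39) -- flattened 13x3 pose matrices X1..Xn
        # emotion_vec: (batch, emotion_dim) -- from facial expression recognition
        _, (h_n, _) = self.lstm(poses)
        fused = torch.cat([h_n[-1], emotion_vec], dim=-1)
        return torch.sigmoid(self.head(fused))  # per-class scores in (0, 1)

model = EmotionIntensityModel()
scores = model(torch.randn(2, 30, 39), torch.randn(2, 1))
print(scores.shape)  # torch.Size([2, 4])
```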
For the embodiment of the present invention, the processing the gesture matrix and the acquired emotion vector information by using the pre-trained emotion intensity model may further specifically include: and processing the input gesture matrix and emotion vector information simultaneously by using a sigmoid function, and outputting the obtained emotion intensity data, wherein the emotion intensity data is stored in a blockchain.
Specifically, the gesture matrix and the emotion vector information extracted from the human posture images are input into the pre-trained emotion intensity model simultaneously, and a sigmoid function in the model can be called for processing. The sigmoid function can be used to convert any real number into a number between 0 and 1, which is treated as a probability. For example, after the sigmoid function processes the gesture matrix and the emotion vector information, probabilities for the different emotion intensity data can be obtained, such as 93% for anger, 1% for happiness and 1% for excitement, with other emotions accounting for the remainder, and the emotion intensity data with the highest probability can be output.
It should be emphasized that, to further ensure the privacy and security of the emotion intensity data, the emotion intensity data may also be stored in a blockchain node.
In order to ensure the privacy and safety of the emotion intensity data, the emotion intensity data can be stored in nodes of a blockchain. Specifically, a blockchain network may be pre-established, and emotion intensity data is recorded by using a recording node in the blockchain network, and the emotion intensity data is packaged and stored in a new block, and the generated management key is stored in the recording node, so as to be conveniently retrieved and fed back when needed. According to the embodiment of the invention, the emotion intensity data is stored by the blockchain technology, so that the safety of the emotion intensity data can be greatly ensured, the data can be easily called, and the emotion recognition efficiency can be improved.
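As a toy illustration only of hash-chained packaging, and not of the blockchain platform actually relied on, the following sketch packages emotion intensity data into a block whose hash links to the previous block.

```python
import hashlib
import json
import time

def package_block(emotion_intensity_data: dict, prev_hash: str) -> dict:
    """Toy example: package emotion intensity data into a hash-chained block."""
    block = {
        "timestamp": time.time(),
        "data": emotion_intensity_data,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = package_block({"intensity": 1, "probability": 0.93}, prev_hash="0" * 64)
print(genesis["hash"][:16])
```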
In another alternative embodiment of the present invention, the step 102 may specifically include: acquiring Euler angle parameters of each characteristic point; and determining an attitude matrix based on each characteristic point under a human body static model coordinate system according to the Euler angle parameters.
The skeleton-length characteristics are removed and only the rotational degree of freedom of each joint point is retained, so this characteristic can be represented by Euler angles. Euler angles are the three independent angle parameters that determine the orientation of a rigid body rotating about a fixed point; for example, the rotational degree of freedom of a joint point can be represented by the three angle parameters r-p-y as a group.
Standardization of different human body feature data is achieved by inputting a predefined human skeleton model, in which each feature point has a predefined coordinate system. The position of each joint on the human body can be determined using the key point extraction technology in OpenPose. OpenPose is an open-source human posture recognition library developed by Carnegie Mellon University (CMU) in the United States, based on convolutional neural networks and supervised learning and built on the caffe framework. It can estimate human body actions, facial expressions, finger movements and the like, works for single or multiple persons, and has excellent robustness. Specifically, the input is an image and the base model may be VGG19; the model output can be represented as the pose matrix described above. For the embodiment of the present invention, only the key point information needs to be extracted as the model input to be fed into the double-layer LSTM-RNN network. OpenCV also provides a call interface to the openpose open-source framework, and the key point information can be calculated in this way. Point clouds near the extracted key points are then extracted, the orientations of the key points relative to the same key points of the predefined model are estimated, and each point is represented by Euler angles through the transformation relation of each joint relative to the predefined joint points. In this way, the pose of the person at time (frame) i can be represented by the following matrix Xi:
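(The matrix itself does not survive in this text; a plausible reconstruction, assuming one row of r-p-y parameters per feature point, is:)

X_i = \begin{pmatrix} r_1 & p_1 & y_1 \\ r_2 & p_2 & y_2 \\ \vdots & \vdots & \vdots \\ r_{13} & p_{13} & y_{13} \end{pmatrix}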
wherein r, p and y in the matrix can respectively represent Euler angle parameters of each characteristic point.
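As a sketch of how such r-p-y parameters could be obtained from an estimated per-joint rotation matrix, the snippet below uses scipy's rotation utilities; the "xyz" axis convention is an assumption.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_to_rpy(rotation_matrix: np.ndarray) -> np.ndarray:
    """Convert a 3x3 joint rotation matrix into (r, p, y) Euler angles."""
    return Rotation.from_matrix(rotation_matrix).as_euler("xyz")

# Identity rotation -> all three Euler angles are zero.
print(rotation_to_rpy(np.eye(3)))  # [0. 0. 0.]
```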
In yet another alternative embodiment of the present invention, the method may further comprise: and carrying out recognition processing on the obtained facial expression information by using a preset facial recognition algorithm to obtain corresponding emotion vector information.
The specific process of the face recognition algorithm may include: reading a facial expression image, estimating the approximate positions of the facial features using the top of the head as a reference point, and uniformly setting mark points on the contours of all feature parts of the face; dividing the face into two left-right symmetrical parts by the central axis fitted from the center points of the lines connecting the eyebrows and the pupils and the center point of the mouth, adjusting the image to the same horizontal line without scaling, translation or rotation, and establishing a facial expression shape model; dividing the left/right eyebrows and the mouth into different regions according to the left/right eyes in the facial expression shape model, and defining these regions as feature candidate regions; and extracting feature vectors from each feature candidate region using a differential image method, that is, performing a differential operation between all image sequences of the image processed in the previous step and neutral-expression images in a database, and extracting the facial expression feature vector from the image sequence with the largest mean differential value in each feature candidate region.
After the facial expression feature vector is obtained, the emotion vector data corresponding to the facial expression feature vector is retrieved locally. The emotion vector data can represent the emotion type expressed by the facial expression; for example, a facial expression such as frowning may correspond to the emotion vector for dissatisfaction.
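A minimal numpy sketch of the differential-image idea described above is given below: each frame is differenced against a neutral-expression image, and for each candidate region the frame with the largest mean difference supplies the feature vector. The region coordinates are placeholders, not values from the patent.

```python
import numpy as np

def expression_features(frames: np.ndarray, neutral: np.ndarray, regions: dict) -> dict:
    """frames: (n, H, W) grayscale sequence; neutral: (H, W) neutral-expression image."""
    diffs = np.abs(frames.astype(np.float32) - neutral.astype(np.float32))
    features = {}
    for name, (y0, y1, x0, x1) in regions.items():
        region_diffs = diffs[:, y0:y1, x0:x1]
        best = region_diffs.mean(axis=(1, 2)).argmax()  # frame with largest mean difference
        features[name] = region_diffs[best].ravel()     # feature vector for that region
    return features

# Placeholder candidate regions (row/column bounds), purely illustrative.
regions = {"left_eye": (30, 60, 20, 60), "right_eye": (30, 60, 70, 110), "mouth": (90, 130, 40, 90)}
feats = expression_features(np.random.rand(5, 160, 120), np.random.rand(160, 120), regions)
print({k: v.shape for k, v in feats.items()})
```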
In yet another alternative embodiment of the present invention, the method may further comprise: according to the human body posture information, establishing a homogeneous transformation matrix of human body joint points based on a human body static model coordinate system; and determining the coordinates of each joint point in a matrix multiplication mode, and determining the joint point as a characteristic point of the human body posture.
As shown in fig. 2, the G coordinate system may be the human body static model coordinate system, and the method for representing this coordinate system may be specifically described as follows: first, a 'skeleton tree' containing 13 feature points is extracted from an acquired posture image of a person, and the relative relations between the feature points in the skeleton tree are stored statically as a predefined model; second, because everyone shares the same body structure while skeleton lengths differ between individuals, a homogeneous transformation matrix T can be introduced to represent the rigid transformation of different individuals relative to the corresponding points on the static model, and the position of any point can then be obtained through matrix multiplication.
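A short numpy sketch of this homogeneous-transformation step is given below: a 4x4 matrix T built from a rotation and a translation maps a point on the predefined static model to the corresponding individual's joint position by matrix multiplication. The numeric values are illustrative only.

```python
import numpy as np

def homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform T from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Joint position on the predefined static model (homogeneous coordinates).
model_point = np.array([0.0, 0.25, 0.0, 1.0])

# Illustrative rigid transform of one individual relative to the static model.
T = homogeneous(np.eye(3), np.array([0.02, -0.01, 0.0]))

# The individual's joint position is obtained by matrix multiplication.
joint_position = T @ model_point
print(joint_position[:3])
```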
In yet another alternative embodiment of the present invention, the method may further comprise: training an emotion intensity model according to the RNN-LSTM model, the sample gesture data and the preset emotion gesture label.
Specifically, before model training, pre-training needs to be performed by making a data set or adopting a data set from the network; for example, for a certain posture, the actual emotion of the observed person needs to be determined to complete the data labeling. The specific process of model training may include the following. Many data in real life have both temporal and spatial characteristics, such as motion trajectories of human bodies and videos of successive frames; likewise, the human body posture corresponds to a set of data at each time point, and the data often have certain spatial characteristics. Therefore, to perform tasks such as classification and prediction on such time series, it is necessary to model and extract features in both time and space. A common temporal modeling tool is the recurrent neural network (RNN) and its related model, the long short-term memory network (LSTM); thanks to its unique gate structure design, the LSTM has a strong ability to extract time-series features and is therefore widely used for prediction problems, where it has achieved good results. The conventional LSTM structure includes three gate structures, namely an input gate, an output gate and a forget gate, together with a neural node (cell); the input may be the posture representation of the current frame of the human body at time t, and the output may be a posture descriptor describing the type of the current posture. For the embodiment of the invention, n LSTM structures are connected transversely to form the double-layer LSTM-RNN structure, and because a continuous image stream is usually required when determining the human body posture, the video stream is represented by {X1, X2, X3, ..., Xn-1, Xn} and used as the input of the model.
It should be noted that training the model requires pre-training on a data set, followed by secondary training on a self-made data set to achieve robustness. For example, the current posture image stream of a person is recorded by a camera, the posture description matrix is obtained through OpenPose key point extraction, and the emotion type and intensity of the current person are queried to complete the data labeling, after which further training is performed. The embodiment of the invention still adopts the LSTM unit structure, but uses a double-layer LSTM structure plus a fully connected layer; the double-layer structure improves the detection of temporal correlations.
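A minimal supervised training loop consistent with this description is sketched below, assuming the EmotionIntensityModel outlined earlier and batches of (pose sequence, emotion vector, emotion-gesture label). The optimizer, learning rate and loss are assumptions; in practice the loss would usually be applied to pre-sigmoid logits.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    """Minimal training loop over (pose_sequence, emotion_vector, label) batches."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # labels are emotion-gesture class indices
    model.train()
    for _ in range(epochs):
        for poses, emotion_vec, labels in loader:
            optimizer.zero_grad()
            scores = model(poses, emotion_vec)  # sigmoid scores per emotion class
            loss = criterion(scores, labels)    # simplification: loss on scores, not logits
            loss.backward()
            optimizer.step()
    return model

# Synthetic batch for illustration: 8 sequences of 30 frames of flattened 13x3 poses.
fake_loader = [(torch.randn(8, 30, 39), torch.randn(8, 1), torch.randint(0, 4, (8,)))]
# model = train(EmotionIntensityModel(), fake_loader)  # model sketched in the earlier example
```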
Compared with the prior art, in which a person's emotion is inferred only from facial expressions, the emotion information identification method provided by the invention receives an emotion information identification request, wherein the emotion information identification request carries human body posture information; converts the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; processes the posture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and searches for and feeds back the corresponding emotion type according to the emotion intensity data. Therefore, the accuracy and efficiency of emotion information identification can be improved through the dual dimensions of human body posture and facial expression. In addition, the invention stores data by using blockchain technology, so that the security of the emotion information can be improved.
Further, as a specific implementation of fig. 1, an embodiment of the present invention provides an emotion information identification device, as shown in fig. 3, where the device includes: a receiving unit 21, a converting unit 22, a processing unit 23 and a feedback unit 24.
A receiving unit 21, configured to receive an emotion information identification request, where the emotion information identification request carries human body posture information;
a conversion unit 22, configured to convert the human body posture information into a posture matrix including posture feature points by using a preset posture conversion algorithm;
The processing unit 23 may be configured to process the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data;
and the feedback unit 24 is used for searching and feeding back the corresponding emotion type according to the emotion intensity data.
Further, the processing unit 23 includes:
the processing module 231 may be configured to process the gesture matrix and the obtained emotion vector information by using a pre-trained emotion intensity model, so as to obtain emotion intensity.
Further, the processing module 231 may specifically be configured to process the input gesture matrix and emotion vector information simultaneously by using a sigmoid function, and output the obtained emotion intensity data.
Further, the processing module 231 may be further configured to store the emotion intensity data using a blockchain technique.
Further, the conversion unit 22 includes:
an obtaining module 221, configured to obtain euler angle parameters of each feature point;
The determining module 222 may be configured to determine, according to the euler angle parameter, a pose matrix based on each feature point in the coordinate system of the static model of the human body.
Further, the apparatus further comprises:
The identifying unit 25 may be configured to identify the obtained facial expression information by using a preset facial recognition algorithm, so as to obtain corresponding emotion vector information.
Further, the apparatus further comprises:
A building unit 26, configured to build a homogeneous transformation matrix of human joint points based on a human static model coordinate system according to the human posture information;
The determining unit 27 may be configured to determine coordinates of each joint point by means of matrix multiplication, and determine the joint point as a feature point of the human body posture.
Further, the apparatus further comprises:
The training unit 28 may be configured to train the emotion intensity model according to the RNN-LSTM model, the sample gesture data, and the preset emotion gesture label.
Based on the above method as shown in fig. 1, correspondingly, an embodiment of the present invention further provides a storage medium, where at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to execute the following steps: receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information; converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and searching and feeding back the corresponding emotion type according to the emotion intensity data.
Based on the above embodiments of the method shown in fig. 1 and the apparatus shown in fig. 3, the embodiment of the present invention further provides a computer device, as shown in fig. 4, including a processor (processor) 31, a communication interface (Communications Interface) 32, a memory (memory) 33, and a communication bus 34, wherein the processor 31, the communication interface 32, and the memory 33 communicate with each other via the communication bus 34. The communication interface 32 is used for communicating with other devices, such as network elements of a user terminal or another server. The processor 31 is configured to execute a program, and may specifically perform the relevant steps in the above-described emotion information identification method embodiment. In particular, the program may include program code including computer operating instructions. The processor 31 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
The one or more processors included in the terminal may be the same type of processor, such as one or more CPUs; but may also be different types of processors such as one or more CPUs and one or more ASICs. And a memory 33 for storing a program. The memory 33 may comprise a high-speed RAM memory or may further comprise a non-volatile memory (non-volatile memory), such as at least one disk memory. The program may be specifically for causing the processor 31 to: receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information; converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and searching and feeding back the corresponding emotion type according to the emotion intensity data.
According to the technical scheme, the emotion information identification request can be received, and the emotion information identification request carries human body posture information; converting the human body posture information into a posture matrix containing posture feature points by using a preset posture conversion algorithm; processing the gesture matrix according to a preset emotion intensity algorithm to obtain emotion intensity data; and searching and feeding back the corresponding emotion type according to the emotion intensity data. Therefore, the accuracy and the efficiency of emotion information identification can be improved through the double dimensions of the human body gesture and the facial expression. In addition, the invention stores data by using the blockchain technology, so that the safety of emotion information can be improved.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The blockchain (Blockchain), essentially a de-centralized database, is a string of data blocks that are generated in association using cryptographic methods, each of which contains information from a batch of network transactions for verifying the validity (anti-counterfeit) of its information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
It should be noted that, other corresponding descriptions of each functional module related to the emotion information identification device provided by the embodiment of the present invention may refer to corresponding descriptions of the method shown in fig. 1, and are not repeated herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the methods and apparatus described above may be referenced to one another. In addition, the "first", "second", and the like in the above embodiments are for distinguishing the embodiments, and do not represent the merits and merits of the embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in accordance with embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
Claims (5)
1. An emotion information recognition method, characterized by comprising:
receiving an emotion information identification request, wherein the emotion information identification request carries human body posture information;
According to the human body posture information, establishing a homogeneous transformation matrix of human body joint points based on a human body static model coordinate system; determining the coordinates of each joint point in a matrix multiplication mode, and determining the joint point as a characteristic point of the human body posture;
acquiring Euler angle parameters of each characteristic point; determining an attitude matrix based on each characteristic point under a human body static model coordinate system according to the Euler angle parameters;
Processing the gesture matrix and the acquired emotion vector information by using a pre-trained emotion intensity model to obtain emotion intensity data, wherein the emotion intensity model is of a double-layer LSTM-RNN structure, the double-layer LSTM-RNN structure is formed by transversely connecting n LSTM structures, the gesture matrix comprises n gesture matrixes, each gesture matrix corresponds to a frame of human body gesture image, and the emotion vector information is obtained by searching based on facial expression feature vectors;
Searching and feeding back the corresponding emotion type according to the emotion intensity data;
the method further comprises the steps of:
Carrying out recognition processing on the obtained facial expression information by using a preset facial recognition algorithm to obtain corresponding emotion vector information;
the method further comprises the steps of:
Training an emotion intensity model according to the RNN-LSTM model, the sample gesture data and the preset emotion gesture label.
2. The method of claim 1, wherein the processing the gesture matrix and the obtained emotion vector information using a pre-trained emotion intensity model to obtain emotion intensity data comprises:
And processing the input gesture matrix and emotion vector information simultaneously by using a sigmoid function, and outputting obtained emotion intensity data, wherein the emotion intensity data is stored in a blockchain.
3. An emotion information recognition device, comprising:
a receiving unit, configured to receive an emotion information identification request, wherein the emotion information identification request carries human body posture information;
an establishing unit, configured to establish, according to the human body posture information, a homogeneous transformation matrix of the human body joint points based on a human body static model coordinate system;
a determining unit, configured to determine the coordinates of each joint point by matrix multiplication, and to take each joint point as a characteristic point of the human body posture;
a conversion unit, configured to acquire Euler angle parameters of each characteristic point and to determine, according to the Euler angle parameters, a gesture matrix based on each characteristic point in the human body static model coordinate system;
a processing unit, configured to process the gesture matrix and the acquired emotion vector information by using a pre-trained emotion intensity model to obtain emotion intensity data, wherein the emotion intensity model has a double-layer LSTM-RNN structure formed by laterally connecting n LSTM structures, the gesture matrix comprises n gesture matrices, each gesture matrix corresponds to one frame of a human body gesture image, and the emotion vector information is obtained by lookup based on facial expression feature vectors;
a feedback unit, configured to search for and feed back the corresponding emotion type according to the emotion intensity data;
the apparatus further comprises:
an identification unit, configured to perform recognition processing on the acquired facial expression information by using a preset facial recognition algorithm to obtain the corresponding emotion vector information, before the gesture matrix and the acquired emotion vector information are processed by the pre-trained emotion intensity model to obtain the emotion intensity data;
the apparatus further comprises:
a training unit, configured to train the emotion intensity model according to the RNN-LSTM model, sample gesture data and preset emotion gesture labels, before the gesture matrix and the acquired emotion vector information are processed by the pre-trained emotion intensity model to obtain the emotion intensity data.
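A minimal training sketch corresponding to the training step of claims 1 and 3 follows. It assumes the hypothetical EmotionIntensityModel class from the previous sketch, batches of sample gesture data paired with preset emotion-gesture labels expressed as intensity targets, and a mean-squared-error objective; the patent does not specify the actual loss, optimizer, or label format.

```python
import torch
import torch.nn as nn

def train_emotion_intensity(model, loader, epochs=10, lr=1e-3):
    """Fit the RNN-LSTM emotion intensity model on sample gesture data.
    loader yields (pose_seq, emotion_vec, intensity_label) batches."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()   # assumed objective; not stated in the patent
    model.train()
    for epoch in range(epochs):
        total = 0.0
        for pose_seq, emotion_vec, intensity_label in loader:
            optimizer.zero_grad()
            pred = model(pose_seq, emotion_vec)
            loss = criterion(pred, intensity_label)
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch}: mean loss {total / max(len(loader), 1):.4f}")
    return model
```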
4. A storage medium having a computer program stored thereon, wherein the storage medium stores at least one executable instruction for causing a processor to perform operations corresponding to the emotion information identification method of any one of claims 1-2.
5. A computer device comprising a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another via the communication bus, and the memory is configured to store at least one executable instruction for causing the processor to perform operations corresponding to the emotion information identification method of any one of claims 1-2.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010349534.0A CN111680550B (en) | 2020-04-28 | 2020-04-28 | Emotion information identification method and device, storage medium and computer equipment |
PCT/CN2020/111036 WO2021217973A1 (en) | 2020-04-28 | 2020-08-25 | Emotion information recognition method and apparatus, and storage medium and computer device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010349534.0A CN111680550B (en) | 2020-04-28 | 2020-04-28 | Emotion information identification method and device, storage medium and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111680550A (en) | 2020-09-18
CN111680550B (en) | 2024-06-04
Family
ID=72452275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010349534.0A Active CN111680550B (en) | 2020-04-28 | 2020-04-28 | Emotion information identification method and device, storage medium and computer equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111680550B (en) |
WO (1) | WO2021217973A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022141894A1 (en) * | 2020-12-31 | 2022-07-07 | 苏州源想理念文化发展有限公司 | Three-dimensional feature emotion analysis method capable of fusing expression and limb motion |
CN113255557B (en) * | 2021-06-08 | 2023-08-15 | 苏州优柿心理咨询技术有限公司 | Deep learning-based video crowd emotion analysis method and system |
CN114863548B (en) * | 2022-03-22 | 2024-05-31 | 天津大学 | Emotion recognition method and device based on nonlinear space characteristics of human body movement gestures |
CN114998834A (en) * | 2022-06-06 | 2022-09-02 | 杭州中威电子股份有限公司 | Medical warning system based on face image and emotion recognition |
CN115131876B (en) * | 2022-07-13 | 2024-10-29 | 中国科学技术大学 | Emotion recognition method and system based on human body movement gait and posture |
CN115937943A (en) * | 2022-12-09 | 2023-04-07 | 中巡壹(江苏)智能科技有限公司 | Robot vision system based on emotion calculation |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160095735A (en) * | 2015-02-04 | 2016-08-12 | 단국대학교 천안캠퍼스 산학협력단 | Method and system for complex and multiplex emotion recognition of user face |
CN106803098A (en) * | 2016-12-28 | 2017-06-06 | 南京邮电大学 | A kind of three mode emotion identification methods based on voice, expression and attitude |
CN108596039A (en) * | 2018-03-29 | 2018-09-28 | 南京邮电大学 | A kind of bimodal emotion recognition method and system based on 3D convolutional neural networks |
CN108805087A (en) * | 2018-06-14 | 2018-11-13 | 南京云思创智信息科技有限公司 | Semantic temporal fusion association based on multi-modal Emotion identification system judges subsystem |
CN109145754A (en) * | 2018-07-23 | 2019-01-04 | 上海电力学院 | Merge the Emotion identification method of facial expression and limb action three-dimensional feature |
CN109684911A (en) * | 2018-10-30 | 2019-04-26 | 百度在线网络技术(北京)有限公司 | Expression recognition method, device, electronic equipment and storage medium |
CN109815938A (en) * | 2019-02-27 | 2019-05-28 | 南京邮电大学 | Multi-modal affective characteristics recognition methods based on multiclass kernel canonical correlation analysis |
CN110147729A (en) * | 2019-04-16 | 2019-08-20 | 深圳壹账通智能科技有限公司 | User emotion recognition methods, device, computer equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105868694B (en) * | 2016-03-24 | 2019-03-08 | 中国地质大学(武汉) | The bimodal emotion recognition method and system acted based on facial expression and eyeball |
CN111401116B (en) * | 2019-08-13 | 2022-08-26 | 南京邮电大学 | Bimodal emotion recognition method based on enhanced convolution and space-time LSTM network |
- 2020-04-28 CN CN202010349534.0A patent/CN111680550B/en active Active
- 2020-08-25 WO PCT/CN2020/111036 patent/WO2021217973A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111680550A (en) | 2020-09-18 |
WO2021217973A1 (en) | 2021-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111680550B (en) | Emotion information identification method and device, storage medium and computer equipment | |
CN112800903B (en) | Dynamic expression recognition method and system based on space-time diagram convolutional neural network | |
US12039454B2 (en) | Microexpression-based image recognition method and apparatus, and related device | |
CN109948475B (en) | Human body action recognition method based on skeleton features and deep learning | |
CN109815826B (en) | Method and device for generating face attribute model | |
WO2019174439A1 (en) | Image recognition method and apparatus, and terminal and storage medium | |
CN112766160A (en) | Face replacement method based on multi-stage attribute encoder and attention mechanism | |
Sincan et al. | Using motion history images with 3d convolutional networks in isolated sign language recognition | |
Geetha et al. | A vision based dynamic gesture recognition of indian sign language on kinect based depth images | |
CN112329525A (en) | Gesture recognition method and device based on space-time diagram convolutional neural network | |
CN111108508B (en) | Face emotion recognition method, intelligent device and computer readable storage medium | |
Rao et al. | Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera. | |
CN107025678A (en) | A kind of driving method and device of 3D dummy models | |
Santhalingam et al. | Sign language recognition analysis using multimodal data | |
CN112183198A (en) | Gesture recognition method for fusing body skeleton and head and hand part profiles | |
CN110909680A (en) | Facial expression recognition method and device, electronic equipment and storage medium | |
CN112906520A (en) | Gesture coding-based action recognition method and device | |
CN111444488A (en) | Identity authentication method based on dynamic gesture | |
Neverova | Deep learning for human motion analysis | |
Kumar et al. | Mediapipe and cnns for real-time asl gesture recognition | |
CN111797705A (en) | Action recognition method based on character relation modeling | |
CN111274854A (en) | Human body action recognition method and vision enhancement processing system | |
Srininvas et al. | A framework to recognize the sign language system for deaf and dumb using mining techniques | |
Kumar et al. | Facial emotion recognition and detection using cnn | |
JP2022095332A (en) | Learning model generation method, computer program and information processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||