CN105512624B - Smile recognition method for facial images and device therefor - Google Patents
Smile recognition method for facial images and device therefor
- Publication number
- CN105512624B CN105512624B CN201510868158.5A CN201510868158A CN105512624B CN 105512624 B CN105512624 B CN 105512624B CN 201510868158 A CN201510868158 A CN 201510868158A CN 105512624 B CN105512624 B CN 105512624B
- Authority
- CN
- China
- Prior art keywords
- face
- convolutional neural
- neural networks
- image
- expressive features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a smile recognition device for facial images, comprising: an image recognition and extraction unit which, for a plurality of character images requiring smile recognition, detects the position of each face, identifies and extracts the facial images therein, and then sends them to an image preprocessing unit; an image preprocessing unit connected to the image recognition and extraction unit; a network building unit for building a convolutional neural network; a network training unit connected to the network building unit and the image preprocessing unit; and a recognition judging unit, connected to the network training unit and the image preprocessing unit, for performing the smile recognition operation. The invention also discloses a smile recognition method for facial images. While guaranteeing high-quality smile recognition on face pictures, the invention can quickly and efficiently make accurate smile judgments over a large number of face pictures, meets users' requirements for a smile recognition function, improves users' working efficiency, and saves valuable time.
Description
Technical field
The present invention relates to the technical fields of pattern recognition and computer vision, and more particularly to a smile recognition method for facial images and a device therefor.
Background technique
At present, with the continuous development of science and technology, face recognition technology is becoming more and more common in people's daily life. Whether in artificial intelligence research or in public safety applications, face recognition has always been a leading-edge, popular technology occupying a very important position.
Among face recognition technologies, smile recognition is a very important topic in the field of computer vision. With the growth of application demands such as face-based payment, sentiment analysis, and medical monitoring, smile recognition, as an important component of human-computer interaction, has attracted the attention of more and more people, prompting researchers around the world to devote great effort to smile recognition technology.
At present, some traditional methods first extract a variety of low-level features of the face, then merge these low-level features through a complicated fusion scheme, and finally feed them into a classifier for smile classification. However, these hand-designed low-level feature representations cannot express well the expression information contained in a face; recognition is slow, costs considerable time, and its accuracy is poor. Such methods are therefore ill-suited to smile recognition, cannot satisfy users' demand for a smile recognition function, and seriously degrade the user experience.
There is therefore an urgent need for a technology which, while guaranteeing high-quality smile recognition on face pictures, can quickly and efficiently judge smiles in a large number of face pictures, meeting users' requirements for a smile recognition function, improving users' working efficiency, and saving valuable time.
Summary of the invention
In view of this, the object of the present invention is to provide a smile recognition method for facial images and a device therefor which, while guaranteeing high-quality smile recognition on face pictures, can quickly and efficiently make accurate smile judgments over a large number of face pictures, meets users' requirements for a smile recognition function, improves users' working efficiency, saves valuable time, and helps improve the user's product experience, which is of great practical significance.
To this end, the present invention provides a smile recognition method for facial images, comprising the steps of:
Step 1: for a plurality of character images requiring smile recognition, detecting the position of each face, and identifying and extracting the facial images therein;
Step 2: scaling each extracted facial image to a preset size, converting it to a grayscale image, and assigning each facial image an expression label of a preset category;
Step 3: building a convolutional neural network comprising an input layer that processes the input facial images in sequence, a preset number of convolutional layers, a preset number of fully connected layers, and an output layer;
Step 4: training the convolutional neural network so as to enlarge the difference in expression features between facial images with different expression labels while reducing the difference in expression features between facial images with the same expression label;
Step 5: inputting every facial image to be subjected to smile recognition, converted to grayscale and scaled to the preset size, into the trained convolutional neural network, extracting the expression feature values of the facial image with the convolutional neural network, and feeding them to a classifier for smile classification, thereby accomplishing the smile recognition operation.
Wherein, in the second step, the preset categories of facial expression label comprise a smile label and a non-smile label.
Wherein, the convolutional neural network comprises one input layer, four convolutional layers, one fully connected layer, and one output layer.
Wherein, in the fourth step, the convolutional neural network is trained specifically as follows:
any two facial images, together with their facial expression labels, are input to the input layer of the convolutional neural network; the expression feature values of the two facial images are extracted by the convolutional and fully connected layers of the convolutional neural network and then output from the output layer;
the expression feature values of the two face pictures are fed into a classifier for classification, and a first loss value over the expression feature values of the two face pictures is computed from the facial expression labels of the two face pictures;
the expression feature values of the two face pictures are compared, and a second loss value over the expression feature values of the two face pictures is computed according to whether the two face pictures carry the same facial expression label;
the first and second loss values over the expression feature values of the two face pictures are used together to back-adjust all the weights in the convolutional neural network, completing the training of the convolutional neural network.
In addition, the present invention also provides a smile recognition device for facial images, comprising:
an image recognition and extraction unit which, for a plurality of character images requiring smile recognition, detects the position of each face, identifies and extracts the facial images therein, and then sends them to the image preprocessing unit;
an image preprocessing unit, connected to the image recognition and extraction unit, for scaling each extracted facial image to a preset size, converting it to a grayscale image, assigning each facial image an expression label of a preset category, and outputting the result to the network training unit and the recognition judging unit;
a network building unit for building a convolutional neural network comprising an input layer that processes the input facial images in sequence, a preset number of convolutional layers, a preset number of fully connected layers, and an output layer;
a network training unit, connected to the network building unit and the image preprocessing unit, for training the convolutional neural network so as to enlarge the difference in expression features between facial images with different expression labels while reducing the difference in expression features between facial images with the same expression label;
a recognition judging unit, connected to the network training unit and the image preprocessing unit, for inputting every facial image processed by the image preprocessing unit into the trained convolutional neural network, extracting the expression feature values of the facial image with the convolutional neural network, and feeding them to a classifier for smile classification, thereby accomplishing the smile recognition operation.
Wherein, the preset categories of facial expression label comprise a smile label and a non-smile label.
Wherein, the convolutional neural network comprises one input layer, four convolutional layers, one fully connected layer, and one output layer.
Wherein, the network training unit comprises a feature extraction module, a first-loss-value module, a second-loss-value module, and a back-adjustment module, in which:
the feature extraction module inputs any two facial images, together with their facial expression labels, to the input layer of the convolutional neural network; the expression feature values of the two facial images are extracted by the convolutional and fully connected layers of the convolutional neural network and then output from the output layer;
the first-loss-value module, connected to the feature extraction module, feeds the expression feature values of the two face pictures into a classifier for classification and computes a first loss value over the expression feature values of the two face pictures from their facial expression labels;
the second-loss-value module, connected to the feature extraction module, compares the expression feature values of the two face pictures and computes a second loss value over them according to whether the two face pictures carry the same facial expression label;
the back-adjustment module, connected to the first-loss-value module and the second-loss-value module, uses the first and second loss values over the expression feature values of the two face pictures together to back-adjust all the weights in the convolutional neural network, completing the training of the convolutional neural network.
As can be seen from the above technical solution, compared with the prior art, the present invention provides a smile recognition method for character images and a device therefor that use the constructed convolutional neural network to extract facial expression features and accomplish smile recognition. While guaranteeing high-quality smile recognition on face pictures, it can quickly and efficiently make accurate smile judgments over a large number of face pictures, meets users' requirements for a smile recognition function, improves users' working efficiency, saves valuable time, and helps improve the user's product experience, which is of great practical significance.
Detailed description of the invention
Fig. 1 is a flowchart of a smile recognition method for facial images provided by the present invention;
Fig. 2 is a schematic diagram of a face picture with a smiling expression in the method;
Fig. 3 is a schematic diagram of an input face picture with a normal (non-smiling) expression in the method;
Fig. 4 is a schematic structural diagram of one example of each component of the convolutional neural network constructed in the method;
Fig. 5 is a schematic diagram of a facial image judged by the method to have smiling expression features;
Fig. 6 is a schematic diagram of a facial image judged by the method to have normal (non-smiling) expression features;
Fig. 7 is a structural block diagram of a smile recognition device for facial images provided by the present invention.
Specific embodiment
In order to enable those skilled in the art to better understand the solution of the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of a smile recognition method for facial images provided by the present invention. Referring to Fig. 1, the method comprises the following steps:
Step S101: for a plurality of character images requiring smile recognition, detecting the position of each face, and identifying and extracting the facial images therein;
It should be noted that existing face recognition technology mainly judges and identifies a face according to the relative positions of the eyes and mouth and the approximate shape of the face. A variety of face recognition systems have been built with a face recognition module as their core and are widely applied: face recognition access management systems, face recognition access control and attendance systems, face recognition video surveillance systems, and so on.
In a specific implementation of the present invention, facial key points such as the eyes, ears, nose, eyebrows, and mouth can be set as eye points, ear points, nose points, eyebrow points, and mouth points to detect and determine the position of a face, determine the shape and contour of the face in the character image, and then extract the corresponding facial image from the character image.
Step S102: scaling each extracted facial image to a preset size and converting it to a grayscale image;
Step S103: assigning each facial image an expression label of a preset category;
In the present invention, the preset size can be configured in advance according to the needs of users, for example any size from 48 × 48 pixels to 256 × 256 pixels, preferably 90 × 90 pixels.
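The preprocessing of step S102 can be sketched in plain Python. The luminance weights and nearest-neighbour scaling below are common choices, not ones specified by the patent, and the tiny 2 × 2 crop is illustrative only (the patent's preferred preset size is 90 × 90 pixels):

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to
    grayscale using the common ITU-R BT.601 luminance weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def resize_nearest(gray_image, out_h, out_w):
    """Nearest-neighbour resize of a 2-D grayscale image to out_h x out_w."""
    in_h, in_w = len(gray_image), len(gray_image[0])
    return [[gray_image[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]

# Illustrative 2x2 RGB "face crop" scaled to a 4x4 preset size.
crop = [[(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 255)]]
gray = to_grayscale(crop)
fixed = resize_nearest(gray, 4, 4)
```

A production pipeline would normally delegate both operations to an image library, but the arithmetic above is all that step S102 requires.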
In the present invention, the preset categories of facial expression label comprise a smile label and a normal (non-smile) label; referring to Figs. 2 and 3, the face pictures in Fig. 2 and Fig. 3 are assigned the smile label and the normal (non-smile) label, respectively.
Step S104: building a convolutional neural network (CNN) comprising an input layer that processes the input facial images in sequence, a preset number of convolutional layers, a preset number of fully connected layers, and an output layer, referring to Fig. 4;
Step S105: training the convolutional neural network so as to enlarge the difference in expression features between facial images with different expression labels while reducing the difference in expression features between facial images with the same expression label;
Step S106: inputting every facial image to be subjected to smile recognition, converted to grayscale and scaled to the preset size, into the trained convolutional neural network; the convolutional neural network extracts the expression feature values of the facial image and feeds them to a classifier for smile classification, finally judging whether the face in the facial image is smiling, thereby accomplishing the smile recognition operation.
It should be noted that, in the present invention, all weights of the convolutional neural network are randomly initialized. Referring to Fig. 4, the preset number of convolutional layers is between 3 and 7, preferably 4; the preset number of fully connected layers is between 1 and 3, preferably 1.
In the convolutional neural network of the present invention, the activation function of the convolutional layers is preferably the ReLU function; the stride, kernel size, and number of kernels of each convolutional layer can be set freely, and the network structure is shown in Fig. 4 (a verbal description of Fig. 4 is given in the example below). With the grayscale facial image as the input picture, the input of each convolutional layer is multiplied by that layer's weights to obtain a numerical value. The principle of the ReLU function is that if this value is greater than 0 the output keeps the computed value, and if it is less than 0 the output is set to 0. Of course, the ReLU function can be replaced by other activation functions.
In the convolutional neural network of the present invention, the fully connected layer preferably uses the sigmoid activation function, although other activation functions can also be used. The fully connected layer is used to extract the expression feature values of a face picture.
It should be noted that the convolutional neural network is a network structure composed of interconnected layers containing convolution operations; its main function is to extract features from a picture, such as the expression feature values of a facial image.
Within the convolutional neural network, the input layer feeds the image data (such as a facial image) into the network for subsequent processing; the convolutional layers extract features of local regions of the picture; the fully connected layer extracts more discriminative features from the previous layer's output; and the output layer judges whether the face is smiling, obtaining the corresponding output value from the previous layer's output and the weights between it and the next layer.
It should be noted that step S105 of training the convolutional neural network specifically comprises:
Step S1051: any two facial images, together with their facial expression labels, are input to the input layer of the convolutional neural network; the expression feature values of the two facial images are extracted by the convolutional and fully connected layers of the convolutional neural network and then output from the output layer;
Step S1052: the expression feature values of the two face pictures are fed into a classifier for classification, and a first loss value over the expression feature values of the two face pictures is computed from their facial expression labels;
It should be noted that the first loss function used to compute the first loss value over the expression feature values of the two face pictures is a cross-entropy of the form
L1 = -Σx p(x) log q(x)
where x is the expression feature value extracted in step S1051, p(x) is the probability of the true facial expression distribution, and q(x) is the predicted probability. The first loss value, together with the second loss value from step S1053, is used to back-adjust all the weights in the convolutional neural network.
It should be noted that the function of the classifier is to classify the expression category of the face according to the features extracted by the preceding convolutional neural network. In a specific implementation, the present invention can use a softmax classifier.
The softmax classifier computes the probability distribution over the different expressions, and which expression the input face shows is judged from that distribution. Its concrete operation is: the output of the preceding layer is a set of feature values; these feature values are multiplied by different weights and then normalized, yielding the probability distribution over the different expressions.
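The softmax normalisation and the first loss can be sketched in plain Python; the cross-entropy form follows the description of p(x) and q(x), and the example scores are illustrative only:

```python
import math

def softmax(scores):
    """Normalise a list of raw scores into a probability distribution
    over the expression classes, as the softmax classifier does."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def first_loss(p_true, q_pred):
    """Cross-entropy between the known true distribution p(x) and the
    predicted distribution q(x): small when the prediction is close
    to the truth, large otherwise."""
    return -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)

scores = [2.0, 0.5]      # illustrative raw outputs for (smile, non-smile)
q = softmax(scores)      # predicted distribution q(x)
p = [1.0, 0.0]           # true distribution p(x): the face is smiling
loss_value = first_loss(p, q)
```

A confident correct prediction gives a smaller loss than an uncertain one, which is exactly the behaviour the description attributes to the first loss value.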
It should be noted that the first loss value over the expression feature values of the two face pictures is computed from the label information: if the finally judged probability is close to the true result, the loss value is small; if the final result differs considerably from the true result, the loss value is large. The two face pictures are each fed into the convolutional neural network of the invention, yielding two first loss values corresponding to the different faces; these are later combined with the second loss value obtained in the subsequent step and used together to adjust the network weights.
Regarding the first loss function, it should be noted that p(x) is the known true expression distribution, while q(x) is the predicted expression distribution probability computed by softmax.
It should be noted that the expression label of a facial expression works as follows: if the expression is a smile, the probability of smiling is 1; if it is not a smile, the probability of not smiling is 1 and the other probability is 0. This gives the known true expression distribution p(x) above. The judged probability value is a number between 0 and 1 representing the probability of a certain expression; this is the predicted expression distribution probability q(x) above.
Step S1053: the expression feature values of the two face pictures are compared, and a second loss value over the expression feature values of the two face pictures is computed according to whether the two face pictures carry the same facial expression label;
It should be noted that, in the second loss function used to compute the second loss value over the expression feature values of the two face pictures, xi and xj denote the two photos input to the classifier, their expression feature values are f(xi) and f(xj) respectively, cij = 1 indicates that the two input photos show the same expression, and cij = 0 indicates that they show different expressions. The purpose of this second loss function is: if the two photos show the same expression, the difference between their features is reduced; if they show different expressions, the difference between their features is increased. Together with the loss function in step S1052, it back-adjusts all the weights of the convolutional neural network.
It should be noted that the function of the second loss value is to shrink the feature gap between similar expressions while widening the feature distance between different expressions. The expression feature values of the two face pictures (that is, the output of the last fully connected layer) are computed first, and the second loss value is then obtained through the calculation formula of the second loss function, in which f(xi) and f(xj) represent the two face pictures.
Regarding the calculation formula of the second loss function, it should be noted that f(xi) and f(xj) are the outputs of the last fully connected layer of the convolutional neural network; Fig. 5 and Fig. 6 are visualized facial expression feature schematic diagrams.
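The behaviour described for the second loss function matches a contrastive-style pairwise loss. The patent's exact formula is not reproduced on this page, so the squared-distance form and the margin below are assumptions made only to illustrate the described behaviour:

```python
def second_loss(f_xi, f_xj, c_ij, margin=1.0):
    """Contrastive-style pairwise loss consistent with the description:
    c_ij = 1 (same expression)  -> penalise feature distance,
    c_ij = 0 (different)        -> penalise distances below a margin.
    The squared-distance form and the margin are assumptions; the
    patent's exact formula is not reproduced here."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(f_xi, f_xj))
    if c_ij == 1:
        return dist_sq
    return max(0.0, margin - dist_sq ** 0.5) ** 2

# Two close feature vectors: small loss when labelled alike,
# large loss when labelled differently.
same = second_loss([0.2, 0.8], [0.25, 0.75], c_ij=1)
diff = second_loss([0.2, 0.8], [0.25, 0.75], c_ij=0)
```

Minimising this loss pulls same-expression features together and pushes different-expression features apart, which is the stated purpose of the second loss value.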
Step S1054: the first and second loss values over the expression feature values of the two face pictures are used together to back-adjust all the weights in the convolutional neural network, completing the training of the convolutional neural network.
It should be noted that the final loss value is obtained by adding the first loss value and the second loss value; the weights are adjusted using the conventionally known gradient descent method.
It should be noted that the purpose of training the convolutional neural network is to adjust the weights in the network; concretely, the first and second loss values are computed separately and combined into the final loss value, and the network weights are then adjusted by gradient descent.
It should be noted that the above steps S1051 to S1054 train the convolutional neural network based on the gradient descent method and the back-propagation algorithm.
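The weight-adjustment step can be sketched as a plain gradient-descent update; the toy one-dimensional loss below is illustrative only and stands in for the combined first-plus-second loss:

```python
def gradient_descent_step(weight, grad, learning_rate=0.1):
    """One gradient-descent update: move each weight against the
    gradient of the final (first plus second) loss value."""
    return weight - learning_rate * grad

# Toy illustration: minimise loss(w) = (w - 3)^2, whose gradient is
# 2*(w - 3). The real network applies the same update rule to every
# weight, with gradients supplied by back-propagation.
w = 0.0
for _ in range(100):
    w = gradient_descent_step(w, 2 * (w - 3))
```

After repeated updates w converges to the loss minimum at 3, mirroring how repeated back-adjustment drives the network weights toward a minimum of the combined loss.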
Fig. 5 and Fig. 6 are schematic diagrams of facial images judged, by running step S106 of the smile recognition method for facial images provided by the present invention, to have smiling expression features and normal (non-smiling) expression features, respectively.
It should be noted that the present invention uses a deep convolutional neural network to extract feature information from pictures to accomplish smile recognition. The method uses two kinds of labels as supervision to train the network, which can simultaneously enlarge the difference between image features of different categories and increase the similarity between image features of the same category; it can therefore extract more discriminative features, achieve a better smile recognition function, and help solve the smile recognition problem. The invention exploits the powerful feature extraction capability of convolutional neural networks, and the expression features extracted from the facial images guarantee the accuracy of the final smile recognition. The effect of the invention is clearly superior to that of traditional smile recognition.
To describe the specific implementation of the present invention in detail, a certain smile recognition database is taken as an embodiment to further illustrate the method of the present invention. The database contains 4000 photos, covering different scenes (such as daytime, night, indoor, outdoor) and different faces (male, female, young, old). In this embodiment of the present invention, the above steps S101 to S104 are run in sequence to establish a convolutional neural network with 4 convolutional layers and 1 fully connected layer (see Fig. 4); all weights of the convolutional neural network are randomly initialized. The activation function of the convolutional layers is the ReLU function, and the input facial images are pictures of 90 × 90 pixels. The first convolutional layer uses 32 kernels of size 11 × 11 × 1; the second convolutional layer uses 96 kernels of size 5 × 5 × 32; the third convolutional layer uses 128 kernels of size 2 × 1 × 96; the fourth convolutional layer uses 96 kernels of size 2 × 1 × 128; the fully connected layer that follows has dimension 160, as shown in Fig. 4. Steps S105 and S106 are then run in sequence, so that a smile recognition judgement is made for the facial image in every photo in the smile recognition database, i.e., whether each face is smiling, finally realizing smile recognition for all photos in the database.
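As a cross-check of this embodiment, the number of kernel weights implied by each convolutional layer's stated kernel count and size can be computed directly; biases, strides and padding are not given in the text and are left out of this sketch.

```python
# Kernel counts and sizes as stated in the embodiment above.
conv_layers = [
    (32,  (11, 11, 1)),    # layer 1: 32 kernels of 11 x 11 x 1
    (96,  (5, 5, 32)),     # layer 2: 96 kernels of 5 x 5 x 32
    (128, (2, 1, 96)),     # layer 3: 128 kernels of 2 x 1 x 96
    (96,  (2, 1, 128)),    # layer 4: 96 kernels of 2 x 1 x 128
]

def kernel_weight_count(n_kernels, shape):
    """Weights in one convolutional layer: kernels x (h x w x channels)."""
    kh, kw, c = shape
    return n_kernels * kh * kw * c

for i, (n, shape) in enumerate(conv_layers, start=1):
    print(f"conv{i}: {kernel_weight_count(n, shape)} weights")
```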
Based on the smile recognition method for facial images provided above, and referring to Fig. 7, the present invention also provides a smile recognition device for facial images, comprising:
An image recognition and extraction unit 701, configured to detect the positions of faces in a plurality of character images requiring smile recognition, identify and extract the facial images therein, and then send them to the image pre-processing unit;
An image pre-processing unit 702, connected with the image recognition and extraction unit, configured to scale each extracted facial image to a facial image of a preset size, convert it to a grayscale image, assign a preset-category expression label to every facial image, and output the results to the network training unit 704 and the recognition judging unit 705;
A network establishing unit 703, configured to establish a convolutional neural network (CNN), the convolutional neural network comprising, in sequence, an input layer that processes the input facial images, a preset number of convolutional layers, a preset number of fully connected layers and an output layer, referring to Fig. 4;
A network training unit 704, connected respectively with the network establishing unit 703 and the image pre-processing unit 702, configured to train the convolutional neural network, enlarging the expression-feature difference between facial images with different expression label information while reducing the expression-feature difference between facial images with the same expression label information;
A recognition judging unit 705, connected respectively with the network training unit 704 and the image pre-processing unit 702, configured to input every facial image for smile recognition processed by the image pre-processing unit 702 into the trained convolutional neural network, extract the expression feature value of the facial image by the convolutional neural network, and send it to the classifier for smile judgement and classification, finally judging whether the face in the facial image is smiling and realizing the smile recognition operation.
In the present invention, the image recognition and extraction unit 701 can be any existing face recognition module.
In the present invention, the image pre-processing unit 702, the network establishing unit 703, the network training unit 704 and the recognition judging unit 705 can each be a central processing unit (CPU), a digital signal processor (DSP) or a microcontroller (MCU) installed in the device of the present invention. These units can be provided as separate devices or integrated together.
In the present invention, it should be noted that existing face recognition technology mainly identifies faces according to the relative positions of the eyes and mouth and the approximate shape of the face. A variety of face recognition systems built around a face recognition module have been developed and widely applied: face recognition access management systems, face recognition access control and attendance systems, face recognition video monitoring systems, and so on.
In the present invention, in a specific implementation, the image recognition and extraction unit can use facial organs such as the eyes, ears, nose, eyebrows and mouth as key points (i.e., setting eye points, ear points, nose points, eyebrow points and mouth points) to detect and determine the position of a face, determine the shape and contour of the face in the character image, and then extract the corresponding facial image from the character image.
In the present invention, for the image pre-processing unit, the preset size can be configured in advance according to the needs of the user; for example, it can be any size between 48 × 48 pixels and 256 × 256 pixels, preferably 90 × 90 pixels.
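The pre-processing just described (scaling to the preset size and converting to grayscale) can be sketched as below. The luma weights and nearest-neighbour scaling are common choices assumed here for illustration; the patent does not specify which conversion or scaling method is used.

```python
def to_grayscale(rgb_pixels):
    """Convert an H x W image of (r, g, b) tuples to grayscale using the
    common luma weights (an assumed, standard choice)."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_pixels]

def resize_nearest(img, size):
    """Nearest-neighbour scaling to a square preset size (a stand-in for
    whatever scaling the pre-processing unit actually uses)."""
    h, w = len(img), len(img[0])
    return [[img[y * h // size][x * w // size] for x in range(size)]
            for y in range(size)]

face = [[(200, 150, 100)] * 4 for _ in range(4)]   # dummy 4 x 4 RGB "face"
gray = to_grayscale(face)
gray90 = resize_nearest(gray, 90)                  # preferred preset size
print(len(gray90), len(gray90[0]))                 # 90 90
```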
In the present invention, the preset-category facial expression label information includes smile label information and normal (non-smile) label information. Referring to Fig. 2 and Fig. 3, the face pictures in Fig. 2 and Fig. 3 are assigned smile label information and normal (non-smile) label information, respectively.
It should be noted that, for the present invention, all weights of the convolutional neural network established by the network establishing unit are randomly initialized. Referring to Fig. 4, the preset number of convolutional layers is between 3 and 7, preferably 4; the preset number of fully connected layers is between 1 and 3, preferably 1.
For the present invention, in the convolutional neural network, the activation function of the convolutional layers preferably uses the ReLU function; the stride, kernel size and number of kernels of each convolutional layer can be freely set, and the network structure is shown in Fig. 4 (a verbal description of Fig. 4 is given in the embodiment example). With the grayscale facial image as the input picture, the input of each convolutional layer is multiplied by the weights of that layer to obtain a numerical value. The principle of the ReLU function is simply this: if the value is greater than 0, the output keeps the computed value; if it is less than 0, the output is set to 0. Of course, the ReLU function can be replaced with other activation functions.
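A minimal sketch of the ReLU rule just described (keep the value if it is greater than 0, otherwise output 0):

```python
def relu(value):
    """ReLU: pass positive inputs through unchanged, clamp the rest to 0."""
    return value if value > 0 else 0

print([relu(v) for v in [-1.5, 0.0, 2.3]])  # [0, 0, 2.3]
```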
For the present invention, in the convolutional neural network, the fully connected layer preferably uses the sigmoid activation function, although other activation functions can also be used. The fully connected layer is used to extract the expression feature value of the face picture.
For the present invention, it should be noted that the convolutional neural network is a network structure composed of interconnected layers containing convolution operations; its main function is to extract features from pictures, such as the expression feature values of facial images.
In the convolutional neural network, the input layer feeds the image data (such as facial images) into the network for subsequent processing; the convolutional layers extract features of local regions of the picture; the fully connected layer extracts more discriminative features from the output of the previous layer; and the output layer judges whether the face is smiling, with each layer's output obtained from the previous layer's output and the weights between the two layers.
For the present invention, it should be noted that the network training unit 704, used to train the convolutional neural network, specifically includes a feature extraction module, a first loss value obtaining module, a second loss value obtaining module and a reverse adjustment module, in which:
The feature extraction module is configured to input any two facial images and their corresponding facial expression label information into the input layer of the convolutional neural network, extract the expression feature values of the two facial images through the convolutional layers and fully connected layer of the convolutional neural network, and then output them from the output layer;
The first loss value obtaining module, connected with the feature extraction module, is configured to send the expression feature values of the two face pictures into the classifier for classification and, according to the facial expression label information of the two face pictures, calculate the first loss value of the expression feature values of the two face pictures;
It should be noted that the first loss function used to calculate the first loss value of the expression feature values of the two face pictures is as follows:
wherein x is the expression feature value of the facial image extracted in step S1051, p(x) is the true facial expression distribution probability, and q(x) is the predicted probability. The first loss value, combined with the second loss value from step S1053, is used to reversely adjust all the weights in the convolutional neural network.
For the present invention, it should be noted that the function of the classifier is to classify the expression category of the face according to the features extracted by the preceding convolutional neural network. In a specific implementation, the present invention can use a softmax classifier.
The softmax classifier calculates the probability distribution over the different expressions and judges which expression the input face shows according to those probabilities. Its specific operation is as follows: the output of the previous layer is a set of feature values; these feature values are multiplied by different weights and then normalized, yielding the probability distribution of the different expressions.
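The softmax operation just described can be sketched as follows; the two toy scores stand in for the previous layer's feature values already multiplied by the classifier weights, and the (smile, non-smile) reading of the two entries is an assumption for illustration.

```python
import math

def softmax(scores):
    """Exponentiate and normalize scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy weighted scores for (smile, non-smile).
probs = softmax([2.0, 0.5])
print(probs[0] > probs[1])  # True: the larger score gets the larger probability
```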
For the present invention, it should be noted that the first loss value of the expression feature values of the two face pictures is calculated according to the label information. Specifically, the closer the finally judged probability result is to the true result, the smaller the loss value; the farther the final result is from the true result, the larger the loss value. By feeding the two face pictures into the convolutional neural network of the present invention, two first loss values corresponding to the different faces are obtained; combined with the second loss value obtained in the subsequent step, they are used together to adjust the network weights.
For the present invention, regarding the first loss function formula, it should be noted that p(x) is the known true expression distribution, and q(x) is the predicted expression distribution probability calculated by softmax.
It should be noted that, for the present invention, the expression information of a facial expression works as follows: if the expression is a smile, the probability of smiling is 1; if it is not a smile, the probability of not smiling is 1 and the other probability is 0; this yields the known true expression distribution p(x) above. The judged probability value, by contrast, is a number between 0 and 1 representing the probability of a certain expression, namely the predicted expression distribution probability q(x) above.
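The behaviour described above, a one-hot true distribution p(x), a predicted distribution q(x) between 0 and 1, and a loss that shrinks as the prediction approaches the truth, matches the standard cross-entropy. Since the formula itself is not reproduced in the text, the sketch below assumes that standard form.

```python
import math

def first_loss(p, q):
    """Cross-entropy between the true distribution p(x) and the
    predicted distribution q(x): -sum(p * log q)."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p_smile = [1.0, 0.0]                     # true label: smiling
good = first_loss(p_smile, [0.9, 0.1])   # prediction close to the truth
bad = first_loss(p_smile, [0.2, 0.8])    # prediction far from the truth
print(good < bad)  # True: a closer prediction gives a smaller loss
```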
The second loss value obtaining module, connected with the feature extraction module, is configured to compare the expression feature values of the two face pictures and, according to whether the two face pictures have facial expression label information of the same category, calculate the second loss value of the expression feature values of the two face pictures;
It should be noted that the second loss function used to calculate the second loss value of the expression feature values of the two face pictures is as follows:
wherein xi and xj denote the two photos input to the classifier, their expression feature values are f(xi) and f(xj) respectively, cij = 1 indicates that the two input photos show the same expression, and cij = 0 indicates that they show different expressions. The purpose of this second loss function is to reduce the difference between the features of the two photos if they show the same expression, and to increase it if they show different expressions; combined with the loss function in step S1052, it is used to reversely adjust all the weights of the convolutional neural network.
It should be noted that, for the present invention, the function of the second loss value is to shrink the feature gap between similar expressions while widening the feature distance between different expressions. The expression feature values of the two face pictures, i.e., the outputs of the last fully connected layer, are first calculated; the second loss value is then obtained by the calculation formula of the second loss function above, in which f(xi) and f(xj) represent the two face pictures. Fig. 5 and Fig. 6 are schematic diagrams of visualized facial expression features.
Regarding the calculation formula of the second loss function, it should be noted that f(xi) and f(xj) are the outputs of the last fully connected layer of the convolutional neural network.
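The behaviour described, cij = 1 pairs pull their features f(xi) and f(xj) together while cij = 0 pairs push them apart, matches a standard pairwise contrastive loss. The exact formula is not reproduced in the text, so the specific form and the margin value below are assumptions for illustration.

```python
def second_loss(f_i, f_j, c_ij, margin=1.0):
    """Pairwise loss on two feature vectors: squared distance for
    same-expression pairs, squared hinge on the margin for different
    ones. The margin value is an assumed hyperparameter."""
    dist2 = sum((a - b) ** 2 for a, b in zip(f_i, f_j))
    if c_ij == 1:                    # same expression: shrink the gap
        return dist2
    gap = margin - dist2 ** 0.5      # different expression: widen the gap
    return max(0.0, gap) ** 2

f_xi, f_xj = [0.2, 0.8], [0.25, 0.75]    # two nearby feature vectors
same = second_loss(f_xi, f_xj, c_ij=1)   # small: features may stay close
diff = second_loss(f_xi, f_xj, c_ij=0)   # large: features must separate
print(same < diff)  # True
```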
The reverse adjustment module, connected respectively with the first loss value obtaining module and the second loss value obtaining module, is configured to use the first loss value and the second loss value of the expression feature values of the two face pictures together to reversely adjust all the weights in the convolutional neural network, completing the training of the convolutional neural network.
For the present invention, it should be noted that the final loss value is obtained by adding the first loss value and the second loss value; the weights are adjusted using the conventional gradient descent method. The purpose of training the convolutional neural network is to adjust the weights in the network: specifically, the first loss value and the second loss value are calculated separately to obtain the final loss value, and the network weights are then adjusted by gradient descent.
It should be noted that the network training unit 704 runs the above first to fourth steps, training the convolutional neural network based on the gradient descent method and the back-propagation algorithm.
Fig. 5 and Fig. 6 show, respectively, a schematic diagram of a facial image judged to have smile expression features and a schematic diagram of one judged to have normal (non-smile) expression features by the smile recognition method for facial images provided by the present invention.
In conclusion compared with prior art, the present invention provides a kind of smiling face's recognition methods of facial image and its
Device, utilize constructed convolutional neural networks, extract the expressive features of face complete smiling face identification, can guarantee pair
While face picture carries out high quality smiling face identification, quickly and efficiently smiling face in a large amount of face picture is accurately known
Do not judge, meet requirement of the user to smiling face's identification function, improve the working efficiency of user, saves people's valuable time, have
Conducive to the product use feeling for improving user, it is of great practical significance.
By using the technology provided by the present invention, the convenience of people's work and life can be greatly improved, substantially raising people's standard of living.
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. A smile recognition method for a facial image, characterized by comprising the steps of:
Step 1: detecting the positions of faces in a plurality of character images requiring smile recognition, and identifying and extracting the facial images therein;
Step 2: scaling each extracted facial image to a facial image of a preset size, converting it to a grayscale image, and assigning expression label information of a preset category to every said facial image;
Step 3: establishing a convolutional neural network, the convolutional neural network comprising, in sequence, an input layer that processes the input facial images, a preset number of convolutional layers, a preset number of fully connected layers and an output layer;
Step 4: training the convolutional neural network, enlarging the expression-feature difference between facial images with different expression label information while reducing the expression-feature difference between facial images with the same expression label information;
Step 5: inputting every facial image for smile recognition, processed into a grayscale image and scaled to the preset size, into the trained convolutional neural network, extracting the expression feature value of the facial image by the convolutional neural network and sending it to a classifier for smile judgement and classification, realizing the smile recognition operation;
In Step 4, the step of training the convolutional neural network specifically comprises:
inputting any two facial images and their corresponding facial expression label information into the input layer of the convolutional neural network, extracting the expression feature values of the two facial images through the convolutional layers and fully connected layer of the convolutional neural network, and then outputting them from the output layer;
sending the expression feature values of the two face pictures into the classifier for classification and, according to the facial expression label information of the two face pictures, calculating the first loss value of the expression feature values of the two face pictures; wherein the first loss function for calculating the first loss value is as follows:
wherein x is the expression feature value extracted from the facial image, p(x) is the true facial expression distribution probability, and q(x) is the predicted probability;
comparing the expression feature values of the two face pictures and, according to whether the two face pictures have facial expression label information of the same category, calculating the second loss value of the expression feature values of the two face pictures; wherein the second loss function for calculating the second loss value is as follows:
wherein xi and xj denote the two photos input to the classifier, their expression feature values are f(xi) and f(xj) respectively, cij = 1 indicates that the two input photos show the same expression, and cij = 0 indicates that they show different expressions;
using the first loss value and the second loss value of the expression feature values of the two face pictures together to reversely adjust all the weights in the convolutional neural network, completing the training of the convolutional neural network.
2. The method as described in claim 1, characterized in that, in Step 2, the facial expression label information of the preset category includes smile label information and non-smile label information.
3. The method as described in claim 1, characterized in that the convolutional neural network includes one input layer, four convolutional layers, one fully connected layer and one output layer.
4. A smile recognition device for a facial image, characterized by comprising:
an image recognition and extraction unit, configured to detect the positions of faces in a plurality of character images requiring smile recognition, identify and extract the facial images therein, and then send them to an image pre-processing unit;
the image pre-processing unit, connected with the image recognition and extraction unit, configured to scale each extracted facial image to a facial image of a preset size, convert it to a grayscale image, assign expression label information of a preset category to every facial image, and output the results to a network training unit and a recognition judging unit;
a network establishing unit, configured to establish a convolutional neural network, the convolutional neural network comprising, in sequence, an input layer that processes the input facial images, a preset number of convolutional layers, a preset number of fully connected layers and an output layer;
the network training unit, connected respectively with the network establishing unit and the image pre-processing unit, configured to train the convolutional neural network, enlarging the expression-feature difference between facial images with different expression label information while reducing the expression-feature difference between facial images with the same expression label information;
the recognition judging unit, connected respectively with the network training unit and the image pre-processing unit, configured to input every facial image processed by the image pre-processing unit into the trained convolutional neural network, extract the expression feature value of the facial image by the convolutional neural network and send it to a classifier for smile judgement and classification, realizing the smile recognition operation;
wherein the network training unit includes a feature extraction module, a first loss value obtaining module, a second loss value obtaining module and a reverse adjustment module, in which:
the feature extraction module is configured to input any two facial images and their corresponding facial expression label information into the input layer of the convolutional neural network, extract the expression feature values of the two facial images through the convolutional layers and fully connected layer of the convolutional neural network, and then output them from the output layer;
the first loss value obtaining module, connected with the feature extraction module, is configured to send the expression feature values of the two face pictures into the classifier for classification and, according to the facial expression label information of the two face pictures, calculate the first loss value of the expression feature values of the two face pictures;
wherein the first loss function for calculating the first loss value is as follows:
wherein x is the expression feature value extracted from the facial image, p(x) is the true facial expression probability, and q(x) is the predicted probability;
the second loss value obtaining module, connected with the feature extraction module, is configured to compare the expression feature values of the two face pictures and, according to whether the two face pictures have facial expression label information of the same category, calculate the second loss value of the expression feature values of the two face pictures; wherein the second loss function for calculating the second loss value is as follows:
wherein xi and xj denote the two photos input to the classifier, their expression feature values are f(xi) and f(xj) respectively, cij = 1 indicates that the two input photos show the same expression, and cij = 0 indicates that they show different expressions;
the reverse adjustment module, connected respectively with the first loss value obtaining module and the second loss value obtaining module, is configured to use the first loss value and the second loss value of the expression feature values of the two face pictures together to reversely adjust all the weights in the convolutional neural network, completing the training of the convolutional neural network.
5. The device as claimed in claim 4, characterized in that the facial expression label information of the preset category includes smile label information and non-smile label information.
6. The device as claimed in claim 4, characterized in that the convolutional neural network includes one input layer, four convolutional layers, one fully connected layer and one output layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510868158.5A CN105512624B (en) | 2015-12-01 | 2015-12-01 | A kind of smiling face's recognition methods of facial image and its device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105512624A CN105512624A (en) | 2016-04-20 |
CN105512624B true CN105512624B (en) | 2019-06-21 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103793718A (en) * | 2013-12-11 | 2014-05-14 | 台州学院 | Deep study-based facial expression recognition method |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101760258B1 (en) * | 2010-12-21 | 2017-07-21 | 삼성전자주식회사 | Face recognition apparatus and method thereof |
2015
- 2015-12-01 CN CN201510868158.5A patent/CN105512624B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103793718A (en) * | 2013-12-11 | 2014-05-14 | 台州学院 | Deep learning-based facial expression recognition method |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
Non-Patent Citations (1)
Title |
---|
Face recognition method based on DWT and BP neural networks; Xu Ben; Computer Knowledge and Technology (电脑知识与技术); 2009-08-15; Vol. 5, No. 23; pp. 6520-6522 |
Also Published As
Publication number | Publication date |
---|---|
CN105512624A (en) | 2016-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105512624B (en) | A kind of smiling face's recognition methods of facial image and its device | |
CN104992167B (en) | A kind of method for detecting human face and device based on convolutional neural networks | |
WO2022111236A1 (en) | Facial expression recognition method and system combined with attention mechanism | |
CN114220035A (en) | Rapid pest detection method based on improved YOLO V4 | |
CN104361316B (en) | Dimension emotion recognition method based on multi-scale time sequence modeling | |
CN105512638B (en) | A kind of Face datection and alignment schemes based on fusion feature | |
JP6788264B2 (en) | Facial expression recognition method, facial expression recognition device, computer program and advertisement management system | |
CN111597955A (en) | Smart home control method and device based on expression emotion recognition of deep learning | |
CN102567716B (en) | Face synthetic system and implementation method | |
CN102938065B (en) | Face feature extraction method and face identification method based on large-scale image data | |
CN105373777B (en) | A kind of method and device for recognition of face | |
CN105005774A (en) | Face relative relation recognition method based on convolutional neural network and device thereof | |
CN109815867A (en) | A kind of crowd density estimation and people flow rate statistical method | |
CN106803069A (en) | Crowd's level of happiness recognition methods based on deep learning | |
CN107688784A (en) | A kind of character identifying method and storage medium based on further feature and shallow-layer Fusion Features | |
CN105303195B (en) | A kind of bag of words image classification method | |
CN110781829A (en) | Light-weight deep learning intelligent business hall face recognition method | |
CN105426850A (en) | Human face identification based related information pushing device and method | |
CN113298018A (en) | False face video detection method and device based on optical flow field and facial muscle movement | |
CN105956570B (en) | Smiling face's recognition methods based on lip feature and deep learning | |
CN109101108A (en) | Method and system based on three decision optimization intelligence cockpit human-computer interaction interfaces | |
CN110111365B (en) | Training method and device based on deep learning and target tracking method and device | |
CN104143091A (en) | Single-sample face recognition method based on improved mLBP | |
CN115240119A (en) | Pedestrian small target detection method in video monitoring based on deep learning | |
CN110363156A (en) | A kind of Facial action unit recognition methods that posture is unrelated |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: 300457 unit 1001, block 1, msd-g1, TEDA, No.57, 2nd Street, Binhai New Area Economic and Technological Development Zone, Tianjin
Patentee after: Tianjin Zhongke intelligent identification Co.,Ltd.
Address before: 300457 No. 57, Second Avenue, Economic and Technological Development Zone, Binhai New Area, Tianjin
Patentee before: TIANJIN ZHONGKE INTELLIGENT IDENTIFICATION INDUSTRY TECHNOLOGY RESEARCH INSTITUTE Co.,Ltd.