
CN112633221A - Face direction detection method and related device

Info

Publication number: CN112633221A
Authority: CN (China)
Prior art keywords: face, value, module, detection, key point
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011610824.2A
Other languages: Chinese (zh)
Other versions: CN112633221B (en)
Inventors: 唐健, 祝严刚, 黄海波, 陶昆
Current Assignee: Shenzhen Jieshun Science and Technology Industry Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shenzhen Jieshun Science and Technology Industry Co Ltd

Events:
  • Application filed by Shenzhen Jieshun Science and Technology Industry Co Ltd
  • Priority to CN202011610824.2A
  • Publication of CN112633221A
  • Application granted
  • Publication of CN112633221B
  • Legal status: Active
  • Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a face direction detection method and a related device, which are used to achieve face recognition even when an otherwise normal face picture input into the system has an irregular orientation. The method in the embodiments of the present application includes: acquiring a pre-detection face image; inputting the pre-detection face image into a target model, the target model being obtained by training on the key point feature information and direction feature information of the images in a learning sample set; outputting, through a face direction judgment module, probability values from the detection data output by the target model, the probability values being the probabilities that the face in the pre-detection face image lies in each direction; determining face main direction information according to the probability values; outputting, through a face key point module, face auxiliary direction information from the detection data output by the target model; and determining the face direction of the pre-detection face image according to the face main direction information and the face auxiliary direction information.

Description

Face direction detection method and related device
Technical Field
The embodiment of the application relates to the field of intelligent monitoring, in particular to a face direction detection method and a related device.
Background
With the accelerating pace of smart city construction and breakthroughs in new technologies such as face recognition and artificial intelligence, intelligent face recognition is increasingly favored by a wide range of industries.
At present, the face recognition market has formed product services that combine software and hardware and is gradually developing toward integrated software-hardware products, such as face recognition cameras, face recognition attendance machines, face recognition gates, and person-ID comparison all-in-one machines. Face recognition relies on high-quality frontal face pictures, but in practice the pictures sent to a device or system are often rotated 90, 180, or 270 degrees clockwise for various reasons. Face pictures in this state cannot be recognized directly.
Disclosure of Invention
The embodiments of the present application provide a face direction detection method and a related device, which are used to achieve face recognition even when an otherwise normal face picture input into the system has an irregular orientation.
The present application provides, in a first aspect, a method for detecting a face direction, including:
acquiring a pre-detected face image;
inputting the pre-detected face image into a target model, wherein the target model is obtained by training key point characteristic information and direction characteristic information of images in a learning sample set;
outputting, through a face direction judgment module, probability values from the detection data output by the target model, wherein the probability values are the probabilities that the face in the pre-detection face image lies in each direction;
determining the main direction information of the face according to the probability value;
outputting, through a face key point module, face auxiliary direction information from the detection data output by the target model;
and determining the face direction of the pre-detected face image according to the face main direction information and the face auxiliary direction information.
Optionally, the outputting, through the face key point module, the face auxiliary direction information from the detection data output by the target model includes:
generating key point coordinates of the pre-detection face image according to the detection data output by the target model, wherein the key point coordinates are the coordinates of the left eye, the right eye, the point midway between the two eyes, and the left and right mouth corners;
calculating the coordinate difference between the average eye coordinate and the average mouth corner coordinate of the pre-detection face image;
judging whether the absolute value of the abscissa of the coordinate difference is smaller than the absolute value of the ordinate of the coordinate difference, and if so, judging whether the abscissa of the coordinate difference is larger than 0;
when the abscissa of the coordinate difference is larger than 0, outputting face auxiliary direction information of which the face is rotated by 90 degrees clockwise;
and when the abscissa of the coordinate difference value is not more than 0, outputting face auxiliary direction information of which the face is rotated by 270 degrees clockwise.
Optionally, after determining whether the absolute value of the abscissa of the coordinate difference is smaller than the absolute value of the ordinate of the coordinate difference, the detection method further includes:
if not, judging whether the ordinate of the coordinate difference is larger than 0;
when the vertical coordinate of the coordinate difference is larger than 0, outputting face auxiliary direction information of which the face is a forward face;
and when the ordinate of the coordinate difference is not more than 0, outputting face auxiliary direction information of which the face rotates 180 degrees clockwise.
Optionally, before the pre-detection face image is input to the target model, the detection method further includes:
processing the acquired image sample set to form a face image sample set containing face labeling information, wherein the face image sample set is an image sample set containing faces in a real scene, and the face labeling information is face direction information and face key point information;
sequentially extracting training samples from the face image sample set to be target detection samples;
and inputting the target detection sample into an initial model for training until the initial model converges, to generate the target model, wherein the initial model is a model built on the mobilenet_v2 network.
Optionally, before the training samples are sequentially extracted from the face image sample set as target detection samples, the detection method further includes:
performing data cleaning on the face image sample set, wherein the data cleaning comprises processing invalid image data;
and performing data enhancement on the face image sample set, wherein the data enhancement comprises the adjustment of the size, the brightness, the contrast, the hue and the saturation of the image.
Optionally, the inputting the target detection sample into an initial model for training until the initial model converges includes:
extracting convolutional layer features of the target detection sample through the mobilenet_v2 network;
inputting the features of the convolutional layer into a multitask module to calculate a multitask loss value, wherein the multitask module comprises a face key point module and a face direction judging module;
generating an input count value representing the number of times the target detection sample has been input into the initial model;
judging whether the multitask loss value is smaller than a preset value, if not, updating the parameters of the initial model according to the multitask loss value, inputting the target detection sample into the initial model with the updated parameters again, and performing the following steps: calculating a multitask loss value, and judging whether the multitask loss value is smaller than a preset value;
if yes, judging whether the input count value is equal to 1;
if not, determining that the initial model is the target model.
Optionally, after determining whether the input count value is equal to 1, the detection method further includes:
if yes, updating the parameters of the initial model according to the multitask loss value, selecting another group of training samples, marking them as the target detection sample, and performing the following steps: inputting the initial model, calculating a multitask loss value, and judging whether the multitask loss value is smaller than the preset value.
Optionally, the inputting the convolutional layer features into a multitask module to calculate a multitask loss value includes:
inputting the features of the convolutional layer into a face direction judgment module to calculate a face direction judgment loss value;
inputting the convolutional layer characteristics into a face key point detection module to calculate a face key point loss value;
and calculating a multitask loss value according to the face direction judgment loss value and the face key point loss value.
Optionally, the updating the parameters of the initial model according to the multitask loss value includes:
updating the parameters of the initial model according to the multitask loss value by a stochastic gradient descent method.
The present application provides, in a second aspect, an apparatus for detecting a direction of a human face, including:
the first acquisition unit is used for acquiring a pre-detection face image;
the data input unit is used for inputting the pre-detected face image into a target model, and the target model is a model obtained by training key point characteristic information and direction characteristic information of images in a learning sample set;
a first output unit, configured to output, through a face direction judgment module, probability values from the detection data output by the target model, where the probability values are the probabilities that the face in the pre-detected face image lies in each direction;
the first determining unit is used for determining the main direction information of the face according to the probability value;
the second output unit is configured to output, through a face key point module, face auxiliary direction information from the detection data output by the target model;
and the second determining unit is used for determining the face direction of the pre-detected face image according to the face main direction information and the face auxiliary direction information.
Optionally, the second output unit includes:
the first generation module is used for generating the key point coordinates of the pre-detected face image according to the detection data output by the target model, wherein the key point coordinates are the coordinates of the left eye, the right eye, the point midway between the two eyes, and the left and right mouth corners;
the first calculation module is used for calculating the coordinate difference between the average eye coordinate and the average mouth corner coordinate of the pre-detected face image;
the first judgment module is used for judging whether the absolute value of the abscissa of the coordinate difference value is smaller than the absolute value of the ordinate of the coordinate difference value;
the second judgment module is used for judging whether the abscissa of the coordinate difference value is larger than 0 or not when the first judgment module determines that the absolute value of the abscissa of the coordinate difference value is smaller than the absolute value of the ordinate of the coordinate difference value;
the first execution module is used for outputting face auxiliary direction information of which the face rotates 90 degrees clockwise when the second judgment module determines that the abscissa of the coordinate difference is larger than 0;
and the second execution module is configured to output face auxiliary direction information that the face rotates clockwise by 270 degrees when the second determination module determines that the abscissa of the coordinate difference is not greater than 0.
Optionally, the detection apparatus further includes:
the third judgment module is used for judging whether the ordinate of the coordinate difference is greater than 0 when the first judgment module determines that the absolute value of the abscissa of the coordinate difference is not smaller than the absolute value of the ordinate of the coordinate difference;
the third execution module is used for outputting face auxiliary direction information of which the face is a forward face when the third judgment module determines that the vertical coordinate of the coordinate difference is greater than 0;
and the fourth execution module is used for outputting face auxiliary direction information of which the face rotates 180 degrees clockwise when the third judgment module determines that the ordinate of the coordinate difference is not more than 0.
Optionally, the detection apparatus further includes:
the second acquisition unit is used for processing the acquired image sample set to form a face image sample set containing face labeling information, wherein the face image sample set is an image sample set containing faces in a real scene, and the face labeling information is face direction information and face key point information;
the sample extraction unit is used for sequentially extracting training samples from the face image sample set to be target detection samples;
and the model training unit is used for inputting the target detection sample into an initial model for training until the initial model converges, to generate the target model, wherein the initial model is a model built on the mobilenet_v2 network.
Optionally, the detection apparatus further includes:
the data cleaning unit is used for carrying out data cleaning on the face image sample set, and the data cleaning comprises processing invalid image data;
and the data enhancement unit is used for performing data enhancement on the face image sample set, and the data enhancement comprises the adjustment of the size, the brightness, the contrast, the hue and the saturation of the image respectively.
Optionally, the model training unit includes:
the feature extraction module is used for extracting the convolutional layer features of the target detection sample through the mobilenet_v2 network;
the loss calculation module is used for inputting the convolutional layer characteristics into a multitask module to calculate a multitask loss value, and the multitask module comprises the face key point module and the face direction judgment module;
the count generation module is used for generating an input count value, and the input count value represents the number of times the target detection sample has been input into the initial model;
the fourth judging module is used for judging whether the multitask loss value is smaller than a preset value or not;
a fifth executing module, configured to, when the fourth judging module determines that the multitask loss value is not smaller than the preset value, update the parameters of the initial model according to the multitask loss value, input the target detection sample into the initial model with the updated parameters again, and perform the following steps: calculating a multitask loss value, and judging whether the multitask loss value is smaller than the preset value;
a fifth judging module, configured to judge, when the fourth judging module determines that the multitask loss value is smaller than the preset value, whether the input count value is equal to 1;
a sixth executing module, configured to determine that the initial model is the target model when the fifth judging module determines that the input count value is not equal to 1;
a seventh executing module, configured to, when the fifth judging module determines that the input count value is equal to 1, update the parameters of the initial model according to the multitask loss value, select another group of training samples, mark them as the target detection sample, and perform the following steps: inputting the initial model, calculating a multitask loss value, and judging whether the multitask loss value is smaller than the preset value.
Optionally, the loss calculating module includes:
the direction judgment loss submodule is used for inputting the features of the convolutional layer into a face direction judgment module to calculate a face direction judgment loss value;
the key point loss submodule is used for inputting the features of the convolutional layer into a face key point detection module to calculate a face key point loss value;
and the total loss submodule is used for calculating a multitask loss value according to the face direction judgment loss value and the face key point loss value.
The present application provides, in a third aspect, an apparatus for detecting a face direction, including:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the processor specifically performs the following operations:
acquiring a pre-detected face image;
inputting the pre-detected face image into a target model, wherein the target model is obtained by training key point characteristic information and direction characteristic information of images in a learning sample set;
outputting, through a face direction judgment module, probability values from the detection data output by the target model, wherein the probability values are the probabilities that the face in the pre-detection face image lies in each direction;
determining the main direction information of the face according to the probability value;
outputting, through a face key point module, face auxiliary direction information from the detection data output by the target model;
and determining the face direction of the pre-detected face image according to the face main direction information and the face auxiliary direction information.
According to the technical scheme, the embodiment of the application has the following advantages:
the obtained pre-detected face image is input into a pre-trained network model, face direction information can be generated through a face direction judging module and a face key point module according to detection data output by the model, the face direction is determined, the system can adjust and recognize the face image according to the data, and therefore the purpose of face recognition can be achieved under the condition that the direction of a normal face image input into the system is irregular.
Drawings
Fig. 1 is a schematic flowchart of an embodiment of a method for detecting a face direction in an embodiment of the present application;
fig. 2-1 and 2-2 are schematic flow charts of another embodiment of a method for detecting a face direction in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an embodiment of a device for detecting a face direction in an embodiment of the present application;
fig. 4 is a schematic structural diagram of another embodiment of a device for detecting a face direction in an embodiment of the present application;
fig. 5 is a schematic structural diagram of another embodiment of a device for detecting a face direction in an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the drawings in the embodiment of the present invention, and it is obvious that the described embodiment is only a part of the embodiment of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of the present invention.
The embodiments of the present application provide a face direction detection method and a related device, which are used to achieve face recognition even when an otherwise normal face picture input into the system has an irregular orientation.
In this embodiment, the method for detecting the face direction may be implemented in a system, a server, or a terminal, and is not specifically limited. For convenience of description, the embodiment of the present application uses the system as an example for the execution subject.
Referring to fig. 1, in an embodiment of the present application, an embodiment of a method for detecting a face direction includes:
101. the system acquires a pre-detection face image;
In the present application, the system needs to analyze the direction of a face, so acquiring a face image whose face direction needs to be analyzed is a precondition for carrying out the present application. The process of detecting the face direction of the face image is to detect the acquired face image through a pre-trained model and to perform corresponding processing on the generated detection data so as to determine the face direction of the face image.
The system can acquire the pre-detection face image containing a face in various ways: it can capture the face image in real time through a camera, extract it from a recorded video containing a face, or retrieve an image containing a face from the Internet; the specific manner is not limited here.
102. The system inputs the pre-detected face image into a target model;
After the system acquires a face image to be detected, it needs to identify the key point feature information and the face direction feature information in the image, and then generate the corresponding direction information through the relevant modules according to these features. Therefore, the system needs to input the face image into a target model trained in advance on the key point features and face direction features of faces, so that the target model outputs detection data identifying the feature information of the face image.
103. The system outputs, through a face direction judgment module, probability values from the detection data output by the target model;
In the present application, the system needs to acquire the feature information of the face direction from the detection data output by the target model, and then determine, through the face direction judgment module, the probabilities that the face in the face image lies in each direction. For example: the face direction feature information output by the target model for a face image is input into the face direction judgment module; according to factors such as the organ features in the face image, the module judges that the probability that the face is a frontal face is 15%, rotated 90 degrees clockwise 30%, rotated 180 degrees clockwise 40%, and rotated 270 degrees clockwise 15%; the direction information of the face can then be determined from these probability values.
104. The system determines the main direction information of the face according to the probability value;
the system determines the main direction information of the face by comprehensively judging the probability value acquired in step 103, so that the system can comprehensively analyze and detect from multiple aspects when detecting the face direction in the face image, and the detection accuracy is improved.
For example, if the detection data obtained by the system from the face direction judgment module indicate that the probability that the face is a frontal face is 15%, rotated 90 degrees clockwise 30%, rotated 180 degrees clockwise 40%, and rotated 270 degrees clockwise 15%, the system sorts the four probabilities and takes the direction with the largest probability value as the face main direction information; that is, the system determines from these probability values that the face main direction information of the image is 180 degrees clockwise.
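To make steps 103 and 104 concrete, the following is a minimal sketch of turning a direction head's raw outputs into per-direction probabilities and picking the main direction. It assumes a PyTorch-style model whose direction branch emits four logits; all names are illustrative and not taken from the patent.

```python
import torch

# Four directions in the order used throughout this description.
DIRECTIONS = ["forward", "cw_90", "cw_180", "cw_270"]

def main_direction(direction_logits: torch.Tensor):
    """Convert the direction head's logits into probabilities and take the
    direction with the largest probability as the face main direction."""
    probs = torch.softmax(direction_logits, dim=-1)
    return DIRECTIONS[int(probs.argmax())], probs

# Logits reproducing the 15% / 30% / 40% / 15% example from the text.
logits = torch.log(torch.tensor([0.15, 0.30, 0.40, 0.15]))
direction, probs = main_direction(logits)
print(direction)  # cw_180, i.e. 180 degrees clockwise
```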
105. The system outputs, through the face key point module, face auxiliary direction information from the detection data output by the target model;
the system needs to comprehensively analyze and detect the direction of the face from multiple aspects, so that the auxiliary direction information of the face needs to be acquired in addition to the main direction information of the face. The system extracts face key point characteristic information from detection data output by the target model.
For example, the target model can identify the centers of the left and right eyes, the center of the nose bridge, the left and right corners of the mouth, and the left and right ends of the eyebrows as the key point features of the face. After this key point feature information is input into the face key point module, the module analyzes the distance and angle of the line connecting each pair of feature points and compares the analyzed data with the corresponding distances and angles of a normal upright face, so that the direction of the face in the detected image is determined from the comparison result and used as the face auxiliary direction information. Alternatively, a coordinate system can be preset in the module: after the key point feature information is input into the face key point module, the module determines the position coordinates of each key point and then performs corresponding operations, comparisons, and judgments on these coordinates to determine the direction of the face in the detected image as the face auxiliary direction information. The specific way of deriving the face auxiliary direction information is not limited.
106. The system determines the face direction of the pre-detected face image according to the main face direction information and the auxiliary face direction information.
In the present application, after the system acquires the face main direction information and the face auxiliary direction information, the direction information from the two can be combined to finally determine the face direction of the detected image. For example, suppose that from the same detection data the system obtains face main direction information of 180 degrees clockwise, and face auxiliary direction information of 180 degrees clockwise or of a forward face; the two pieces of information can then be combined to determine that the face direction of the detected image is 180 degrees clockwise.
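Once the final direction is known, the system can restore an upright face before recognition. Below is a minimal sketch under the assumption that Pillow is available and reusing the direction names from the sketch above; the patent does not prescribe this particular implementation.

```python
from PIL import Image

# Counter-clockwise angle that undoes each detected clockwise rotation.
UNDO_ANGLE = {"forward": 0, "cw_90": 90, "cw_180": 180, "cw_270": 270}

def normalize_face(img: Image.Image, face_direction: str) -> Image.Image:
    """Rotate the picture back to an upright face; PIL's rotate() turns the
    image counter-clockwise, which cancels a clockwise rotation of the
    same angle."""
    return img.rotate(UNDO_ANGLE[face_direction], expand=True)
```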
In the embodiment of the present application, the system inputs the acquired pre-detection face image into a pre-trained network model, so that the face key points and face direction data of the face image can be output simultaneously in real time; the system can adjust and recognize the face image according to these data, and face recognition can thus be achieved even when the orientation of a normal face picture input into the system is irregular.
Another embodiment of the method for detecting the direction of the human face will be described in detail below with reference to fig. 2-1 and 2-2.
Referring to fig. 2-1 and 2-2, in an embodiment of the present application, another embodiment of a method for detecting a face direction includes:
201. the system acquires a pre-detection face image;
step 201 in this embodiment is similar to step 101 in the previous embodiment, and is not described herein again.
202. The system processes the acquired image sample set to form a face image sample set containing face labeling information;
the system processes an image sample set acquired under a prepared rich reality background, and manually marks direction information and face key point information, wherein the face direction is mainly divided into 0 degree, clockwise rotation is performed for 90 degrees, clockwise rotation is performed for 180 degrees, and clockwise rotation is performed for 270 degrees and 4 directions. The face key point annotation information comprises 6 key points such as left eye, right eye, middle of two eyes, nose tip, left mouth corner and right mouth corner, and therefore training basis is provided for subsequent model training.
203. The system carries out data cleaning on the face image sample set;
204. the system carries out data enhancement on the face image sample set;
In order to make the images in the image sample set better meet the input conditions of the model and to improve the detection precision of the model, data cleaning and data enhancement need to be performed on the face image sample set. The purpose of data cleaning is to remove poor-quality images from the face image sample set, such as side faces, overexposed faces, and the like. The purpose of data enhancement is to strengthen feature information recognition; the color of the image needs to be enhanced, so the system adjusts parameters such as the brightness, contrast, hue, and saturation of the image.
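The color adjustments above map naturally onto standard torchvision transforms. The following is a sketch under assumed parameter values (the patent gives none); note that a random rotation is deliberately absent, since rotating an image would invalidate its direction label unless the label were updated as well.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((128, 128)),   # size adjustment (assumed input size)
    transforms.ColorJitter(
        brightness=0.3,              # brightness
        contrast=0.3,                # contrast
        saturation=0.3,              # saturation
        hue=0.05,                    # hue
    ),
    transforms.ToTensor(),
])
```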
205. The system sequentially extracts training samples from the face image sample set as target detection samples;
206. The system extracts the convolutional layer features of the target detection sample through the mobilenet_v2 network;
207. The system inputs the convolutional layer features into the face direction judgment module to calculate a face direction judgment loss value;
208. The system inputs the convolutional layer features into the face key point detection module to calculate a face key point loss value;
209. The system calculates a multitask loss value according to the face direction judgment loss value and the face key point loss value;
210. The system generates an input count value;
211. The system judges whether the multitask loss value is smaller than a preset value; if yes, go to step 213, otherwise go to step 212;
212. The system updates the parameters of the initial model according to the multitask loss value, inputs the target detection sample into the initial model with the updated parameters again, and performs the following steps: calculating a multitask loss value, and judging whether the multitask loss value is smaller than the preset value;
213. The system judges whether the input count value is equal to 1; if yes, go to step 214, otherwise go to step 215;
214. The system updates the parameters of the initial model according to the multitask loss value, selects another group of training samples, marks them as the target detection sample, and performs the following steps: inputting the initial model, calculating a multitask loss value, and judging whether the multitask loss value is smaller than the preset value;
215. The system determines that the initial model is the target model;
After the image sample set has been adjusted accordingly, an initial model framework needs to be constructed through a neural network, and the initial model is then trained to the point where the trained target model can identify and output the feature information data of the face image.
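A sketch of such an initial model in PyTorch: a mobilenet_v2 backbone whose convolutional features feed two heads, a 4-way direction classifier and a key point regressor. The head shapes (4 directions; 6 key points, i.e. 12 coordinates) follow the text; everything else is an assumption.

```python
import torch
from torch import nn
from torchvision import models

class FaceDirectionNet(nn.Module):
    def __init__(self, num_directions: int = 4, num_keypoints: int = 6):
        super().__init__()
        # Convolutional feature extractor of mobilenet_v2 (1280 channels out).
        self.backbone = models.mobilenet_v2(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.direction_head = nn.Linear(1280, num_directions)    # multi-class module
        self.keypoint_head = nn.Linear(1280, num_keypoints * 2)  # regression module

    def forward(self, x: torch.Tensor):
        feat = self.pool(self.backbone(x)).flatten(1)  # convolutional layer features
        return self.direction_head(feat), self.keypoint_head(feat)
```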
In the embodiment of the present application, the initial model framework is constructed with mobilenet_v2, and the specific process of initial model training is as follows: a training sample is sequentially extracted from the face image sample set as the target detection sample, the convolutional layer features of the detection sample are extracted, and a face direction loss value and a face key point loss value are calculated separately, where the direction judgment module that calculates the face direction loss value is a multi-class classification module, and the face key point detection module that calculates the face key point loss value is a regression module. After the face direction loss value and the face key point loss value are obtained, a multitask loss value is calculated from them; the multitask loss function MultiTask_loss is formed by combining the face direction judgment loss function and the face key point loss function, as shown in equation (1):
MultiTask_loss = (1/N) · Σ_{i=1}^{N} [ λ_ori · L_ori(y_i, ŷ_i) + λ_lan · L_lan(y_i, ŷ_i) ]    (1)
In equation (1), N is the number of input samples, L_ori is the face direction judgment loss function, L_lan is the face key point loss function, y is the predicted value, ŷ is the label value, λ_ori is the weight of the face direction judgment loss, and λ_lan is the weight of the face key point loss. In the embodiment of the present application, in order to increase the loss weight of the key point regression, λ_ori may be set to 1 and λ_lan to 2.
The face direction judgment loss function is the cross-entropy loss function, as shown in equation (2):
L_ori = −Σ_c ŷ_c · log(y_c)    (2)
The face key point loss function is the wing loss function, as shown in equation (3):
L_lan(x) = w · ln(1 + |x|/ε) if |x| < w, and L_lan(x) = |x| − C otherwise    (3)
where x = y − ŷ is the key point regression error, w is a non-negative number that limits the nonlinear part of the loss function to the interval (−w, w), ε constrains the curvature of the loss function curve, and C = w − w · ln(1 + w/ε) is the constant that connects the linear and nonlinear parts of the loss function.
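Equations (1) to (3) translate directly into code. Below is a sketch using the weights λ_ori = 1 and λ_lan = 2 from the text; the values of w and ε are common defaults, not taken from the patent.

```python
import math
import torch
import torch.nn.functional as F

def wing_loss(pred, target, w: float = 10.0, eps: float = 2.0):
    """Wing loss, equation (3): logarithmic near zero, linear for |x| >= w."""
    x = (pred - target).abs()
    c = w - w * math.log(1.0 + w / eps)  # the constant C joining both branches
    loss = torch.where(x < w, w * torch.log(1.0 + x / eps), x - c)
    return loss.mean()

def multitask_loss(dir_logits, dir_labels, kp_pred, kp_target,
                   lambda_ori: float = 1.0, lambda_lan: float = 2.0):
    """Equation (1): weighted sum of the cross-entropy direction loss,
    equation (2), and the wing key point loss, averaged over the batch."""
    l_ori = F.cross_entropy(dir_logits, dir_labels)
    l_lan = wing_loss(kp_pred, kp_target)
    return lambda_ori * l_ori + lambda_lan * l_lan
```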
Specifically, updating the parameters of the initial model according to the multitask loss value means iteratively updating the model parameters by the stochastic gradient descent method until the model converges.
In the process of training the initial model, convergence may be reached on the very first pass. To reduce such contingency, after the convergence condition is met, the number of times the sample has been input into the initial model is checked. If the system determines that the input count value is 1, the training parameters of the initial model are updated again, and a new training sample is extracted from the image sample set and input into the initial model as the target detection sample for training; if the system determines that the input count value is not 1, the initial model is determined to have completed training, and this initial model is the target model.
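Put together, the training procedure can be sketched as follows. The preset loss value and learning rate are assumptions; multitask_loss and the model refer to the illustrative sketches above, and sample_loader is assumed to yield batches of images with their direction labels and key point targets.

```python
import torch

def train(model, sample_loader, preset_loss: float = 0.05, lr: float = 0.01):
    # Stochastic gradient descent, as prescribed for the parameter updates.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for images, dir_labels, kp_target in sample_loader:
        input_count = 0
        while True:                            # re-input the same sample
            input_count += 1
            dir_logits, kp_pred = model(images)
            loss = multitask_loss(dir_logits, dir_labels, kp_pred, kp_target)
            if loss.item() < preset_loss:      # convergence condition met
                break
            opt.zero_grad()
            loss.backward()
            opt.step()
        if input_count > 1:
            return model   # not a first-pass fluke: target model reached
        # Converged on the very first pass: update once more and train on
        # another group of samples to rule out contingency.
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```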
216. The system inputs the pre-detected face image into a target model;
217. The system outputs, through the face direction judgment module, probability values from the detection data output by the target model;
218. the system determines the main direction information of the face according to the probability value;
steps 216 to 218 in this embodiment are similar to steps 102 to 104 in the previous embodiment, and are not described again here.
219. The system generates the key point coordinates of the pre-detected face image according to the detection data output by the target model;
220. The system calculates the coordinate difference between the average eye coordinate and the average mouth corner coordinate of the pre-detected face image;
221. the system judges whether the absolute value of the abscissa of the coordinate difference is smaller than the absolute value of the ordinate of the coordinate difference; if yes, go to step 222, otherwise go to step 225;
222. the system judges whether the abscissa of the coordinate difference value is greater than 0; if yes, go to step 223, otherwise go to step 224;
223. the system outputs face auxiliary direction information of which the face is rotated by 90 degrees clockwise;
224. the system outputs face auxiliary direction information of which the face rotates by 270 degrees clockwise;
225. the system judges whether the ordinate of the coordinate difference is greater than 0; if yes, go to step 226, otherwise go to step 227;
226. The system outputs face auxiliary direction information indicating that the face is a forward face;
227. the system outputs face auxiliary direction information of which the face is clockwise rotated by 180 degrees;
the system finally determines the face direction of the pre-detected face image according to the main direction information output by the face direction judging module and the auxiliary direction information output by the face key point module. Therefore, after the system determines the main direction information of the face according to the probability value, the system also needs to determine the auxiliary direction information of the face according to the key points of the face.
In the embodiment of the present application, the system generates the key point coordinates of the pre-detected face image according to the detection data output by the target model; besides the left eye, the right eye, and the center point between the two eyes, the key point coordinates include the left mouth corner point and the right mouth corner point. The system calculates the average coordinate of the first three points, denoted (x_eye, y_eye), and the average coordinate of the left and right mouth corner points, denoted (x_mc, y_mc), and then calculates the coordinate difference of the two average coordinates, denoted (dis_x, dis_y). It then judges whether the absolute value of dis_x is smaller than the absolute value of dis_y. If yes, it judges whether dis_x is greater than 0: when dis_x is greater than 0, it outputs face auxiliary direction information indicating that the face is rotated 90 degrees clockwise; when dis_x is not greater than 0, it outputs face auxiliary direction information indicating that the face is rotated 270 degrees clockwise. If the absolute value of dis_x is not smaller than the absolute value of dis_y, it judges whether dis_y is greater than 0: when dis_y is greater than 0, it outputs face auxiliary direction information indicating that the face is a forward face; when dis_y is not greater than 0, it outputs face auxiliary direction information indicating that the face is rotated 180 degrees clockwise.
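The decision rule of the key point module reads directly as code. Below is a sketch in which the difference is taken as eyes minus mouth; the patent defines only the comparisons, so the subtraction order and the names are assumptions.

```python
def auxiliary_direction(kp: dict) -> str:
    """kp maps key point names to (x, y) image coordinates."""
    eye_pts = [kp["left_eye"], kp["right_eye"], kp["mid_eyes"]]
    x_eye = sum(p[0] for p in eye_pts) / 3
    y_eye = sum(p[1] for p in eye_pts) / 3
    x_mc = (kp["left_mouth"][0] + kp["right_mouth"][0]) / 2
    y_mc = (kp["left_mouth"][1] + kp["right_mouth"][1]) / 2
    dis_x, dis_y = x_eye - x_mc, y_eye - y_mc   # coordinate difference
    if abs(dis_x) < abs(dis_y):
        return "cw_90" if dis_x > 0 else "cw_270"
    return "forward" if dis_y > 0 else "cw_180"
```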
228. The system determines the face direction of the pre-detected face image according to the main face direction information and the auxiliary face direction information.
After the main direction information and the auxiliary direction information of the face are obtained, the system can comprehensively determine the face direction of the pre-detected face image according to the face direction with high probability and the direction of the key point.
In the embodiment of the present application, before the training samples are sequentially extracted from the face image set and input into the initial model for training, the system also performs data cleaning and data enhancement on the data in the face image set, filtering out poor-quality face images and enhancing features such as image color, which further improves the training speed of the initial model and the accuracy of face detection.
The method for detecting the direction of the human face has been described in detail above; the apparatus for detecting the direction of the human face is described in detail below.
Referring to fig. 3, in an embodiment of the present application, an embodiment of an apparatus for detecting a direction of a human face includes:
a first obtaining unit 301, configured to obtain a pre-detected face image;
a data input unit 302, configured to input a pre-detected face image into a target model, where the target model is a model obtained by training key point feature information and direction feature information of an image in a learning sample set;
a first output unit 303, configured to output, through a face direction judgment module, probability values from the detection data output by the target model, where the probability values are the probabilities that the face in the pre-detected face image lies in each direction;
a first determining unit 304, configured to determine the main direction information of the face according to the probability value;
a second output unit 305, configured to output, through the face key point module, face auxiliary direction information from the detection data output by the target model;
the second determining unit 306 is configured to determine a face direction of the pre-detected face image according to the face main direction information and the face auxiliary direction information.
In the embodiment of the present application, a pre-detected face image is acquired by the first acquiring unit 301 and input into a pre-trained target model by the data input unit 302 to generate detection data. The first output unit 303 and the second output unit 305 then output the probability values and the face auxiliary direction information through the face direction judgment module and the face key point module respectively, after which the first determining unit 304 determines the face main direction information according to the probability values. Finally, the second determining unit 306 determines the face direction of the pre-detected face image according to the face main direction information acquired by the first determining unit 304 and the face auxiliary direction information acquired by the second output unit 305, so that face recognition can still be performed even when the face image input into the system is irregular in direction.
Referring to fig. 4, in an embodiment of the present application, another embodiment of a device for detecting a direction of a human face includes:
a first obtaining unit 401, configured to obtain a pre-detected face image;
a second obtaining unit 402, configured to process an obtained image sample set to form a face image sample set including face annotation information, where the face image sample set is an image sample set including a face in a real scene, and the face annotation information is face direction information and face key point information;
a data cleaning unit 403, configured to perform data cleaning on the face image sample set, where the data cleaning includes processing invalid image data;
a data enhancement unit 404, configured to perform data enhancement on the face image sample set, where the data enhancement includes adjusting the size, brightness, contrast, hue, and saturation of the image respectively;
a sample extraction unit 405, configured to sequentially extract training samples from the face image sample set as target detection samples;
a model training unit 406, configured to input the target detection sample into the initial model for training until the initial model converges, to generate the target model, where the initial model is a model built on the mobilenet_v2 network;
A data input unit 407, configured to input a pre-detected face image into a target model, where the target model is a model obtained by training key point feature information and direction feature information of an image in a learning sample set;
a first output unit 408, configured to output, through a face direction judgment module, probability values from the detection data output by the target model, where the probability values are the probabilities that the face in the pre-detected face image lies in each direction;
a first determining unit 409, configured to determine main direction information of the face according to the probability value;
a second output unit 410, configured to output, through the face key point module, face auxiliary direction information from the detection data output by the target model;
the second determining unit 411 is configured to determine a face direction of the pre-detected face image according to the face main direction information and the face auxiliary direction information.
In this embodiment of the present application, the model training unit 406 includes a feature extraction module 4061, a loss calculation module 4062, a count generation module 4063, a fourth judgment module 4064, a fifth judgment module 4065, a fifth execution module 4066, a sixth execution module 4067, and a seventh execution module 4068.
The feature extraction module 4061 is configured to extract the convolutional layer features of the target detection sample through the mobilenet_v2 network;
the loss calculation module 4062 is configured to input the convolutional layer characteristics into the multitask module to calculate a multitask loss value;
the count generation module 4063 is configured to generate an input count value;
the fourth judgment module 4064 is configured to judge whether the multitask loss value is smaller than a preset value;
the fifth judgment module 4065 is configured to judge, when the fourth judgment module 4064 determines that the multitask loss value is smaller than the preset value, whether the input count value is equal to 1;
the fifth execution module 4066 is configured to, when the fourth judgment module 4064 determines that the multitask loss value is not smaller than the preset value, update the parameters of the initial model according to the multitask loss value, input the target detection sample into the initial model with the updated parameters again, and perform the following steps: calculating a multitask loss value, and judging whether the multitask loss value is smaller than the preset value;
the sixth execution module 4067 is configured to determine that the initial model is the target model when the fifth judgment module 4065 determines that the input count value is not equal to 1;
the seventh execution module 4068 is configured to, when the fifth judgment module 4065 determines that the input count value is equal to 1, update the parameters of the initial model according to the multitask loss value, select another group of training samples, mark them as the target detection sample, and perform the following steps: inputting the initial model, calculating a multitask loss value, and judging whether the multitask loss value is smaller than the preset value.
Further, loss calculation module 4062 may include a direction determination loss sub-module 40621, a keypoint loss sub-module 40622, and a total loss sub-module 40623.
A direction judgment loss submodule 40621, configured to input the convolution layer characteristics into the face direction judgment module to calculate a face direction judgment loss value;
a key point loss submodule 40622, configured to input the convolutional layer features into the face key point detection module to calculate a face key point loss value;
and the total loss submodule 40623 is used for calculating a multitask loss value according to the face direction judgment loss value and the face key point loss value.
In the embodiment of the present application, the second output unit 410 may include a first generation module 4101, a first calculation module 4102, a first judgment module 4103, a third judgment module 4104, a third execution module 4105, a fourth execution module 4106, a second judgment module 4107, a first execution module 4108, and a second execution module 4109.
The first generation module 4101 is configured to generate a key point coordinate of a pre-detected face image according to detection data output by the target model;
the first calculation module 4102 is configured to calculate a coordinate difference between an eye and a mouth angle average coordinate of a pre-detected face image;
the first judging module 4103 is configured to judge whether an absolute value of an abscissa of the coordinate difference is smaller than an absolute value of a ordinate of the coordinate difference;
the third judging module 4104 is configured to judge whether the ordinate of the coordinate difference is greater than 0 when the first judging module 4103 determines that the absolute value of the abscissa of the coordinate difference is not smaller than the absolute value of the ordinate of the coordinate difference;
the third executing module 4105 is configured to, when the third determining module 4104 determines that the ordinate of the coordinate difference is greater than 0, output face auxiliary direction information that the face is a forward face;
the fourth executing module 4106 is configured to, when the third determining module 4104 determines that the ordinate of the coordinate difference is not greater than 0, output face auxiliary direction information that the face is rotated clockwise by 180 degrees;
the second judging module 4107 is configured to judge whether the abscissa of the coordinate difference is greater than 0 when the first judging module 4103 determines that the absolute value of the abscissa of the coordinate difference is smaller than the absolute value of the ordinate of the coordinate difference;
the first executing module 4108 is configured to, when the second determining module 4107 determines that the abscissa of the coordinate difference is greater than 0, output face auxiliary direction information that the face is rotated by 90 degrees clockwise;
the second performing module 4109 is configured to output face auxiliary direction information that the face is rotated clockwise by 270 degrees when the second determining module 4107 determines that the abscissa of the coordinate difference is not greater than 0.
In the above embodiment, the functions of each unit and each module correspond to the steps in the embodiment shown in fig. 2, and are not described herein again.
Referring to fig. 5, a detailed description is given below of a device for detecting a face direction in an embodiment of the present application, where another embodiment of the device for detecting a face direction in an embodiment of the present application includes:
a processor 501, a memory 502, an input/output unit 503, and a bus 504;
the processor 501 is connected to the memory 502, the input/output unit 503, and the bus 504;
the processor 501 specifically executes the following operations:
acquiring a pre-detected face image;
inputting a pre-detected face image into a target model;
outputting, through a face direction judgment module, probability values from the detection data output by the target model;
determining the main direction information of the face according to the probability value;
outputting, through a face key point module, face auxiliary direction information from the detection data output by the target model;
and determining the face direction of the pre-detected face image according to the main face direction information and the auxiliary face direction information.
In this embodiment, the functions of the processor 501 correspond to the steps in the embodiments described in fig. 1 to fig. 4, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.

Claims (10)

1. A face direction detection method, characterized by comprising the following steps:
acquiring a pre-detected face image;
inputting the pre-detected face image into a target model, wherein the target model is a model obtained by training on key point feature information and direction feature information of images in a learning sample set;
outputting, through a face direction judgment module, probability values for the detection data produced by the target model, wherein the probability values indicate, for each direction, the probability that the face in the pre-detected face image is oriented in that direction;
determining main face direction information according to the probability values;
outputting, through a face key point module, auxiliary face direction information from the detection data produced by the target model;
and determining the face direction of the pre-detected face image according to the main face direction information and the auxiliary face direction information.
2. The detection method according to claim 1, wherein outputting, through the face key point module, the auxiliary face direction information from the detection data output by the target model comprises:
generating key point coordinates of the pre-detected face image according to the detection data output by the target model, wherein the key point coordinates are the coordinates of the left and right eyes, the point between the two eyes, and the left and right mouth corners;
calculating the coordinate difference between the average coordinates of the eyes and the average coordinates of the mouth corners of the pre-detected face image;
judging whether the absolute value of the abscissa of the coordinate difference is smaller than the absolute value of the ordinate of the coordinate difference, and if so, judging whether the abscissa of the coordinate difference is greater than 0;
when the abscissa of the coordinate difference is greater than 0, outputting auxiliary face direction information indicating that the face is rotated 90 degrees clockwise;
and when the abscissa of the coordinate difference is not greater than 0, outputting auxiliary face direction information indicating that the face is rotated 270 degrees clockwise.
3. The detection method according to claim 2, wherein after judging whether the absolute value of the abscissa of the coordinate difference is smaller than the absolute value of the ordinate of the coordinate difference, the detection method further comprises:
if not, judging whether the ordinate of the coordinate difference is greater than 0;
when the ordinate of the coordinate difference is greater than 0, outputting auxiliary face direction information indicating that the face is upright;
and when the ordinate of the coordinate difference is not greater than 0, outputting auxiliary face direction information indicating that the face is rotated 180 degrees clockwise.
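For illustration, the decision tree of claims 2 and 3 can be transcribed directly into code. The sketch below is a minimal Python rendering; the key point ordering, the image coordinate convention (origin at the top left, y increasing downward), and the return encoding in clockwise degrees are assumptions made for illustration, and in practice the signs would follow whatever coordinate system the trained model uses.

    def auxiliary_direction(keypoints):
        # `keypoints` is assumed to hold five (x, y) pairs in the order:
        # left eye, right eye, point between the eyes, left mouth corner,
        # right mouth corner.
        left_eye, right_eye, _, left_mouth, right_mouth = keypoints
        eyes_avg = ((left_eye[0] + right_eye[0]) / 2,
                    (left_eye[1] + right_eye[1]) / 2)
        mouth_avg = ((left_mouth[0] + right_mouth[0]) / 2,
                     (left_mouth[1] + right_mouth[1]) / 2)
        dx = eyes_avg[0] - mouth_avg[0]   # abscissa of the coordinate difference
        dy = eyes_avg[1] - mouth_avg[1]   # ordinate of the coordinate difference
        if abs(dx) < abs(dy):             # claim 2: branch on the abscissa
            return 90 if dx > 0 else 270  # clockwise rotation in degrees
        return 0 if dy > 0 else 180       # claim 3: upright, or rotated 180 degrees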
4. The detection method according to claim 3, wherein before inputting the pre-detected face image into the target model, the detection method further comprises:
processing an acquired image sample set to form a face image sample set containing face labeling information, wherein the image sample set is a set of images containing faces in real scenes, and the face labeling information comprises face direction information and face key point information;
sequentially extracting training samples from the face image sample set as target detection samples;
and inputting the target detection samples into an initial model for training until the initial model converges, so as to generate the target model, wherein the initial model is a model built on the mobilenet_v2 network.
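As a concrete reading of claim 4, an initial model built on mobilenet_v2 with two task heads might look as follows in PyTorch. The use of torchvision's mobilenet_v2 and the head widths (4 assumed direction classes, 5 key points with 2 coordinates each) are illustrative assumptions, not specified by the patent.

    import torch.nn as nn
    from torchvision import models

    class FaceDirectionModel(nn.Module):
        def __init__(self):
            super().__init__()
            backbone = models.mobilenet_v2(weights=None)
            self.features = backbone.features               # convolutional layers
            self.pool = nn.AdaptiveAvgPool2d(1)
            # Two heads of the multitask module (see claims 6 and 8):
            self.direction_head = nn.Linear(1280, 4)        # four assumed directions
            self.keypoint_head = nn.Linear(1280, 10)        # 5 key points x (x, y)

        def forward(self, x):
            feats = self.pool(self.features(x)).flatten(1)  # convolutional layer features
            return self.direction_head(feats), self.keypoint_head(feats)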
5. The detection method according to claim 4, wherein before sequentially extracting the training samples from the face image sample set as target detection samples, the detection method further comprises:
performing data cleaning on the face image sample set, wherein the data cleaning comprises processing invalid image data;
and performing data enhancement on the face image sample set, wherein the data enhancement comprises adjusting the size, brightness, contrast, hue and saturation of the images.
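A minimal data enhancement pipeline covering the properties named in claim 5 could be written with torchvision; the library choice, the target size, and the jitter ranges below are illustrative assumptions only.

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.Resize((128, 128)),                      # size adjustment (assumed input size)
        transforms.ColorJitter(brightness=0.2, contrast=0.2,
                               saturation=0.2, hue=0.05),   # brightness/contrast/saturation/hue
        transforms.ToTensor(),
    ])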
6. The detection method according to claim 5, wherein inputting the target detection sample into the initial model for training until the initial model converges comprises:
extracting convolutional layer features of the target detection sample through the mobilenet_v2 network;
inputting the convolutional layer features into a multitask module to calculate a multitask loss value, wherein the multitask module comprises the face key point module and the face direction judgment module;
generating an input count value representing the number of times the target detection sample has been input into the initial model;
judging whether the multitask loss value is smaller than a preset value; if not, updating the parameters of the initial model according to the multitask loss value, inputting the target detection sample into the initial model with the updated parameters again, and repeating the steps of calculating the multitask loss value and judging whether the multitask loss value is smaller than the preset value;
if yes, judging whether the input count value is equal to 1;
and if not, determining that the initial model is the target model.
7. The detection method according to claim 6, wherein after judging whether the input count value is equal to 1, the detection method further comprises:
if yes, updating the parameters of the initial model according to the multitask loss value, selecting another group of training samples and marking them as the target detection samples, and repeating the steps of inputting the target detection samples into the initial model, calculating the multitask loss value, and judging whether the multitask loss value is smaller than the preset value.
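Claims 6 and 7 together describe a per-sample convergence loop driven by the multitask loss value and the input count value. The sketch below transcribes that control flow literally, assuming the multitask_loss function shown after claim 8 below and a PyTorch optimizer; the function and variable names are illustrative.

    def train(initial_model, sample_stream, preset_value, optimizer):
        for sample, labels in sample_stream:       # sequentially extracted target detection samples
            input_count = 0
            while True:
                input_count += 1                   # times this sample has been input
                loss = multitask_loss(initial_model(sample), labels)
                if loss.item() >= preset_value:    # loss not yet below the preset value:
                    optimizer.zero_grad()
                    loss.backward()                # update parameters and re-input
                    optimizer.step()
                    continue
                if input_count == 1:               # below the preset value on the first input:
                    optimizer.zero_grad()
                    loss.backward()                # update once more, then take
                    optimizer.step()               # another group of samples (claim 7)
                    break
                return initial_model               # converged: the initial model is the target model
        return initial_model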
8. The detection method according to claim 7, wherein inputting the convolutional layer features into the multitask module to calculate the multitask loss value comprises:
inputting the convolutional layer features into the face direction judgment module to calculate a face direction judgment loss value;
inputting the convolutional layer features into the face key point module to calculate a face key point loss value;
and calculating the multitask loss value according to the face direction judgment loss value and the face key point loss value.
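Claim 8 only states that a direction judgment loss and a key point loss are combined; the concrete loss functions and the weighting in the sketch below are assumptions (cross-entropy and smooth L1 are common choices for classification and landmark regression, respectively).

    import torch.nn.functional as F

    def multitask_loss(outputs, labels, keypoint_weight=1.0):
        direction_logits, keypoints = outputs      # the two multitask heads
        direction_loss = F.cross_entropy(direction_logits, labels["direction"])
        keypoint_loss = F.smooth_l1_loss(keypoints, labels["keypoints"])
        # Weighted sum of the face direction judgment loss value and the
        # face key point loss value.
        return direction_loss + keypoint_weight * keypoint_loss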
9. The detection method according to any one of claims 1 to 8, wherein updating the parameters of the initial model according to the multitask loss value comprises:
updating the parameters of the initial model according to the multitask loss value by a stochastic gradient descent method.
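In PyTorch terms, the stochastic gradient descent update of claim 9 is one optimizer step per computed multitask loss value. The learning rate and momentum below are illustrative hyperparameters, and initial_model and loss are assumed to come from the training sketch shown after claim 7 above.

    import torch

    optimizer = torch.optim.SGD(initial_model.parameters(), lr=0.01, momentum=0.9)

    optimizer.zero_grad()   # clear gradients from the previous update
    loss.backward()         # backpropagate the multitask loss value
    optimizer.step()        # apply the stochastic gradient descent update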
10. A face direction detection apparatus, characterized by comprising:
a first acquisition unit, configured to acquire a pre-detected face image;
a data input unit, configured to input the pre-detected face image into a target model, wherein the target model is a model obtained by training on key point feature information and direction feature information of images in a learning sample set;
a first output unit, configured to output, through a face direction judgment module, probability values for the detection data produced by the target model, wherein the probability values indicate, for each direction, the probability that the face in the pre-detected face image is oriented in that direction;
a first determination unit, configured to determine main face direction information according to the probability values;
a second output unit, configured to output, through a face key point module, auxiliary face direction information from the detection data produced by the target model;
and a second determination unit, configured to determine the face direction of the pre-detected face image according to the main face direction information and the auxiliary face direction information.
CN202011610824.2A 2020-12-30 2020-12-30 Face direction detection method and related device Active CN112633221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011610824.2A CN112633221B (en) 2020-12-30 2020-12-30 Face direction detection method and related device

Publications (2)

Publication Number Publication Date
CN112633221A 2021-04-09
CN112633221B 2024-08-09

Family

ID=75286694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011610824.2A Active CN112633221B (en) 2020-12-30 2020-12-30 Face direction detection method and related device

Country Status (1)

Country Link
CN (1) CN112633221B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101986328A (en) * 2010-12-06 2011-03-16 东南大学 Local descriptor-based three-dimensional face recognition method
CN105095829A (en) * 2014-04-29 2015-11-25 华为技术有限公司 Face recognition method and system
US20200042776A1 (en) * 2018-08-03 2020-02-06 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for recognizing body movement
CN109214343A (en) * 2018-09-14 2019-01-15 北京字节跳动网络技术有限公司 Method and apparatus for generating face critical point detection model
CN111274848A (en) * 2018-12-04 2020-06-12 北京嘀嘀无限科技发展有限公司 Image detection method and device, electronic equipment and storage medium
CN110059637A (en) * 2019-04-22 2019-07-26 上海云从企业发展有限公司 A kind of detection method and device of face alignment
CN110796029A (en) * 2019-10-11 2020-02-14 北京达佳互联信息技术有限公司 Face correction and model training method and device, electronic equipment and storage medium
CN111898407A (en) * 2020-06-06 2020-11-06 东南大学 Human-computer interaction operating system based on human face action recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RAJEEV RANJAN: "A fast and accurate system for face detection, identification, and verification", IEEE, 2 April 2019 (2019-04-02), pages 82-96, XP011720004, DOI: 10.1109/TBIOM.2019.2908436 *
SHEN JIANKUN: "Research on a Lightweight and Efficient Face Recognition Algorithm Based on Deep Learning and System Design", China Master's Theses Full-text Database, 15 July 2020 (2020-07-15), pages 138-1171 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095336A (en) * 2021-04-22 2021-07-09 北京百度网讯科技有限公司 Method for training key point detection model and method for detecting key points of target object
CN113095336B (en) * 2021-04-22 2022-03-11 北京百度网讯科技有限公司 Method for training key point detection model and method for detecting key points of target object
CN113191322A (en) * 2021-05-24 2021-07-30 口碑(上海)信息技术有限公司 Method and device for detecting skin of human face, storage medium and computer equipment
CN113313034A (en) * 2021-05-31 2021-08-27 平安国际智慧城市科技股份有限公司 Face recognition method and device, electronic equipment and storage medium
CN113313034B (en) * 2021-05-31 2024-03-22 平安国际智慧城市科技股份有限公司 Face recognition method and device, electronic equipment and storage medium
CN113674230A (en) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 Method and device for detecting key points of indoor backlight face
CN113674230B (en) * 2021-08-10 2023-12-19 深圳市捷顺科技实业股份有限公司 Method and device for detecting key points of indoor backlight face
CN118552997A (en) * 2024-06-13 2024-08-27 广东机电职业技术学院 Student class state assessment method based on deep neural network

Also Published As

Publication number Publication date
CN112633221B (en) 2024-08-09

Similar Documents

Publication Publication Date Title
Chen et al. Fsrnet: End-to-end learning face super-resolution with facial priors
CN112950581B (en) Quality evaluation method and device and electronic equipment
CN112633221A (en) Face direction detection method and related device
US20170161591A1 (en) System and method for deep-learning based object tracking
CN108463823B (en) Reconstruction method and device of user hair model and terminal
CN110472494A (en) Face feature extracts model training method, facial feature extraction method, device, equipment and storage medium
JP2015176169A (en) Image processor, image processing method and program
CN109948476B (en) Human face skin detection system based on computer vision and implementation method thereof
CN109711268B (en) Face image screening method and device
CN109559362B (en) Image subject face replacing method and device
CN112200056B (en) Face living body detection method and device, electronic equipment and storage medium
CN111680544B (en) Face recognition method, device, system, equipment and medium
CN111027450A (en) Bank card information identification method and device, computer equipment and storage medium
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN112836625A (en) Face living body detection method and device and electronic equipment
CN110543848B (en) Driver action recognition method and device based on three-dimensional convolutional neural network
CN110059607B (en) Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium
CN109410138B (en) Method, device and system for modifying double chin
CN111784658B (en) Quality analysis method and system for face image
CN112766065A (en) Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
CN110781712A (en) Human head space positioning method based on human face detection and recognition
CN117496019B (en) Image animation processing method and system for driving static image
CN109508660A (en) A kind of AU detection method based on video
CN113743378A (en) Fire monitoring method and device based on video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant