CN112906446A - Face detection method and device, electronic equipment and computer readable storage medium
- Publication number
- CN112906446A (application number CN201911232487.5A)
- Authority
- CN
- China
- Prior art keywords
- face
- feature
- detected
- sub
- face detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The application provides a face detection method and device, electronic equipment and a computer readable storage medium, wherein the method comprises the following steps: acquiring a face image to be detected of a target object; inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature; inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected; and inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction. According to the embodiment of the application, the plane rotation angle of the face in the face image to be detected can be predicted while the face is detected, which reduces resource overhead.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for face detection, an electronic device, and a computer-readable storage medium.
Background
In recent years, artificial intelligence, one of the most advanced technologies, has developed rapidly, and its research results have been widely applied to many aspects of social life; face detection technologies such as face recognition, storage of face feature information in databases, and face search can be found everywhere. In the face recognition process, because the decoders of different imaging systems differ, face images may be rotated in the plane during decoding, so the face images acquired by imaging equipment are not all upright, and a face rotated in the plane away from the vertical direction reduces the accuracy of face recognition. To improve the accuracy of face recognition under such conditions, the prior art usually trains an additional network model to predict the angle information of the face so as to facilitate subsequent face correction, but this invisibly increases the resource overhead.
Disclosure of Invention
In view of the above problems, the present application provides a face detection method, a face detection device, an electronic device, and a computer-readable storage medium, which can predict the plane rotation angle of a face in a face image to be detected while detecting the face, and are beneficial to reducing resource overhead.
In order to achieve the above object, a first aspect of the embodiments of the present application provides a face detection method, including:
acquiring a face image to be detected of a target object;
inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature;
inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected;
and inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction.
As an optional implementation manner, the inputting the first feature into the second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected includes:
carrying out convolution processing on the first feature by utilizing a plurality of convolution layers to obtain a second feature;
classifying the second feature by using a fully connected layer to locate the positions of the two pupils of the face in the face image to be detected;
and calculating, based on the positions of the two pupils, the included angle between the line connecting the two pupils and the horizontal line, so as to obtain the plane rotation angle of the face in the face image to be detected.
As an optional implementation manner, the obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction includes:
fusing the second feature and the third feature to obtain a fourth feature containing angle information;
and performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected.
As an optional implementation manner, the performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected includes:
processing the fourth feature by using a plurality of residual blocks, and outputting a first point coordinate and a second point coordinate of the face bounding box and the height of the face bounding box;
and obtaining the position of the face in the face image to be detected based on the first point coordinate, the second point coordinate and the height.
As an optional implementation manner, after obtaining the position of the face in the face image to be detected based on the first point coordinate, the second point coordinate, and the height, the method further includes:
and performing affine transformation by using the first point coordinate, the second point coordinate, the height and the plane rotation angle of the face in the face image to be detected, so as to correct the face in the face image to be detected.
As an optional implementation, the training process of the face detection convolutional neural network model includes:
carrying out primary labeling on a face image for training to obtain an initial face detection training data set;
performing re-labeling on the face image in the initial face detection training data set to obtain a target face detection training data set;
training a third convolution block of a preset convolution neural network by using the initial face detection training data set to obtain a third sub-network;
training a second convolution block of a preset convolution neural network by using the target face detection training data set to obtain a second sub-network;
and inputting the target face detection training data set into a preset convolutional neural network for integral training to obtain the face detection convolutional neural network model.
A second aspect of the embodiments of the present application provides a face detection apparatus, including:
the image acquisition module is used for acquiring a face image to be detected of the target object;
the first feature extraction module is used for inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature;
the angle prediction module is used for inputting the first feature into a second sub-network of the face detection convolutional neural network model to carry out angle prediction so as to obtain the plane rotation angle of the face in the face image to be detected;
and the position prediction module is used for inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction.
A third aspect of embodiments of the present application provides an electronic device, including: the face detection method comprises a processor, a memory and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the steps of the face detection method.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned face detection method.
The above scheme of the present application includes at least the following beneficial effects: acquiring a face image to be detected of a target object; inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature; inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected; and inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction. When the face detection convolutional neural network model provided by the application detects a face image to be detected, a first sub-network (shared convolutional layers) first performs shared feature extraction to obtain the first feature, giving a high parameter reuse rate. Two branches follow the first sub-network: one branch (the second sub-network) performs angle prediction on the face in the face image to be detected, and the other branch (the third sub-network) predicts the face position in the face image to be detected by combining the output of the second sub-network, so that the face angle feature is fused into the feature used for predicting the face position. When the face position in the face image to be detected is detected, the plane rotation angle of the face is therefore provided as well, which reduces the resource overhead brought by separately training a network model to predict the plane rotation angle of the face.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a face detection method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of another face detection method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a second sub-network according to an embodiment of the present disclosure;
fig. 5-a is a diagram illustrating pupil location and angle calculation according to an embodiment of the present application;
fig. 5-b is an exemplary diagram of plane rotation angles of a human face according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a third sub-network according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a convolutional neural network according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a face detection apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another face detection apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another face detection apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another face detection apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of another face detection apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, as appearing in the specification, claims and drawings of this application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
First, a network system architecture to which the solution of the embodiments of the present application may be applied is described by way of example with reference to the accompanying drawings. Referring to fig. 1, fig. 1 is a schematic diagram of a network architecture provided in an embodiment of the present application. As shown in fig. 1, it includes an image capturing device, a user terminal, and a server, where the image capturing device includes but is not limited to an access control device, a camera, and a snapshot camera, and may be set up in any place for image capturing, for example: the gate passage of a residential community, the entrance of a residential building, the entrance of a construction site, an intersection with traffic lights, various commercial premises, and the like. The user terminal includes, but is not limited to, a desktop computer, a laptop computer, a tablet computer, a mobile phone, and an Internet Protocol Television (IPTV), and is specifically configured to provide a display window or an interactive interface, display images acquired by the image acquisition device, display the face detection and face recognition results of the server, and enable a worker to interact with the server. The server may be a single server or a server cluster, and is used for detecting or processing the images acquired by the image acquisition device, receiving operation instructions from the user terminal, executing related operations, and displaying the results of the detection, processing, or operations in a display window of the user terminal. All components of the whole network architecture are interconnected through a wired or wireless network for communication, so that the face detection method provided by the application can be implemented.
Based on the network architecture shown in fig. 1, the following describes in detail a face detection method provided in the embodiment of the present application with reference to other drawings.
Referring to fig. 2, fig. 2 is a schematic flow chart of a face detection method according to an embodiment of the present application, as shown in fig. 2, including the steps of:
and S21, acquiring the face image to be detected of the target object.
In the embodiment of the present application, the target object may be a resident who wants to pass through the gate passage of a residential community, a constructor who wants to enter a construction site, or any object within the acquisition range of a camera. Specifically, an image acquisition range may be set in the relevant scene, for example, an image acquisition range of 2 meters in front of the gate passage of a community gate; when a target object enters the image acquisition range, the image acquisition device is triggered to acquire a face image of the target object as the face image to be detected, and the acquired face image to be detected is sent to the server in real time.
S22, inputting the face image to be detected into a first sub-network of the pre-trained face detection convolutional neural network model to extract a first feature.
And S23, inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected.
S24, inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction.
In this embodiment of the present application, the structure of the face detection convolutional neural network model mainly includes a first sub-network, a second sub-network, and a third sub-network. The first sub-network is a network layer shared by the second sub-network and the third sub-network; it mainly performs convolution operations to extract low-level shared features from any acquired face image, and the first feature is the shared feature extracted by the first sub-network. The second sub-network and the third sub-network are two branches behind the first sub-network. The input of the second sub-network is the first feature extracted by the first sub-network, and through the processing of its internal network layers it can predict the angle of the face in any input face image. The input of the third sub-network is likewise the first feature extracted by the first sub-network, and through the processing of its internal network layers it outputs the related information of the face bounding box in the input face image, for example: the upper-left corner coordinate and the lower-right corner coordinate of the face bounding box; the upper-left corner coordinate of the face bounding box together with its width and height; or the upper-left corner coordinate and the upper-right corner coordinate of the face bounding box together with its height. The position of the face can be determined from any of these.
After receiving the face image to be detected acquired by the image acquisition device, the server inputs the face image to be detected into the trained face detection convolutional neural network model. The first sub-network is responsible for extracting the first feature, and the extracted first feature serves simultaneously as the input of the second sub-network and the third sub-network. The second sub-network obtains the second feature by performing convolution and other processing on the first feature and predicts the plane rotation angle of the face using the second feature. When the third sub-network performs face detection, in addition to the first feature it also fuses in the angle feature of the second sub-network, namely the extracted second feature. Finally, the second sub-network and the third sub-network respectively output the plane rotation angle of the face and the position of the face in the face image to be detected.
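For concreteness, the following is a minimal PyTorch sketch of this shared-backbone, two-branch structure. The class name FaceDetectionNet, all layer counts, channel widths, and the fusion-by-concatenation step are illustrative assumptions, not the patent's exact configuration:

```python
import torch
import torch.nn as nn

class FaceDetectionNet(nn.Module):
    """Sketch of the shared-backbone, two-branch structure.
    All sizes are illustrative placeholders."""
    def __init__(self):
        super().__init__()
        # First sub-network: shared convolutional layers extracting the
        # low-level first feature from the input face image.
        self.first_subnet = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Second sub-network: further convolutions producing the second
        # feature, then a fully connected layer locating the two pupils
        # (4 values: x, y for each pupil).
        self.second_convs = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc_pupils = nn.Linear(128, 4)
        # Third sub-network: processes the first feature into the third
        # feature, fuses in the second feature, then predicts the face
        # bounding box (two corner points plus height: 5 values).
        self.third_convs = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.box_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 5),
        )

    def forward(self, image):
        first = self.first_subnet(image)            # shared first feature
        second = self.second_convs(first)           # angle branch feature
        pupils = self.fc_pupils(second.flatten(1))  # two pupil positions
        third = self.third_convs(first)             # detection branch feature
        # Fuse the angle feature into the detection feature (the fourth
        # feature), here by channel concatenation after broadcasting.
        fused = torch.cat([third, second.expand_as(third)], dim=1)
        box = self.box_head(fused)                  # (x, y, x1, y1, h)
        return pupils, box
```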
It can be seen that, in the embodiment of the application, the face image to be detected of the target object is obtained; the face image to be detected is input into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature; the first feature is input into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected; and the first feature is input into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and the position of the face in the face image to be detected is obtained according to the third feature and the second feature extracted by the second sub-network during angle prediction. When the face detection convolutional neural network model provided by the application detects a face image to be detected, a first sub-network (shared convolutional layers) first performs shared feature extraction to obtain the first feature, giving a high parameter reuse rate. Two branches follow the first sub-network: one branch (the second sub-network) performs angle prediction on the face in the face image to be detected, and the other branch (the third sub-network) predicts the face position in the face image to be detected by combining the output of the second sub-network, so that the face angle feature is fused into the feature used for predicting the face position. When the face position in the face image to be detected is detected, the plane rotation angle of the face is therefore provided as well, which reduces the resource overhead brought by separately training a network model to predict the plane rotation angle of the face.
Referring to fig. 3, fig. 3 is a schematic flow chart of another face detection method according to an embodiment of the present application, and as shown in fig. 3, the method includes the steps of:
S31, acquiring a face image to be detected of a target object;
S32, inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature;
S33, inputting the first feature into a second sub-network of the face detection convolutional neural network model, and performing convolution processing on the first feature by utilizing a plurality of convolution layers to obtain a second feature;
S34, classifying the second feature by using a fully connected layer to locate the positions of the two pupils of the face in the face image to be detected;
S35, calculating the included angle between the line connecting the two pupils and the horizontal line based on the positions of the two pupils, to obtain the plane rotation angle of the face in the face image to be detected;
S36, inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the second feature and the third feature.
In this embodiment of the application, the structure of the second sub-network may be as shown in fig. 4. It includes a plurality of convolutional layers (only 2 are shown in fig. 4) followed by a fully connected layer FC. The second sub-network first uses the plurality of convolutional layers to convolve the input first feature to obtain the second feature, and then inputs the second feature into the fully connected layer for classification, so as to locate the positions of the two pupils of the face in the face image to be detected. Then, as shown in fig. 5-a, with the position of the pupil of the left eye as the starting point, the included angle between the line connecting the two pupil positions and the horizontal line is calculated, so as to obtain the plane rotation angle of the face in the face image to be detected. Fig. 5-b gives example diagrams of a face rotated in the plane by 0°, 90°, 180°, and 270°; the actually predicted angle may range from 0° to 360°.
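The angle computation itself reduces to an atan2 over the two pupil coordinates. A small sketch, assuming image coordinates with the y-axis pointing downward and the convention that the left pupil is the starting point:

```python
import math

def face_rotation_angle(left_pupil, right_pupil):
    """Plane rotation angle (degrees in [0, 360)) of a face, taken as
    the angle between the left-to-right pupil line and the horizontal.
    Assumes image coordinates (y grows downward); the left pupil is the
    starting point, as in fig. 5-a."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    # Negate dy so a counter-clockwise rotation of the face gives a
    # positive angle despite the downward image y-axis.
    angle = math.degrees(math.atan2(-dy, dx))
    return angle % 360.0  # map (-180, 180] onto [0, 360)

# Example: pupils level -> 0 degrees; right pupil directly below the
# left pupil -> 270 degrees (face rotated clockwise by a quarter turn).
print(face_rotation_angle((100, 120), (140, 120)))  # 0.0
print(face_rotation_angle((100, 120), (100, 160)))  # 270.0
```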
The above steps are described in relation to the embodiment shown in fig. 2, and may achieve the same or similar beneficial effects, and are not repeated here to avoid repetition.
Therefore, in the embodiment shown in fig. 3, the positions of the two pupils of the face in the face image to be detected are located through the second sub-network, and the included angle between the connecting line of the two pupils and the horizontal line is used as the angle of the plane rotation of the face, so that the difficulty of directly predicting the angle of the plane rotation of the face is reduced, and the accuracy is ensured.
As an optional implementation manner, the obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction includes:
fusing the second feature and the third feature to obtain a fourth feature containing angle information;
and performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected.
Specifically, the performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected includes:
processing the fourth feature by using a plurality of residual blocks, and outputting a first point coordinate and a second point coordinate of the face bounding box and the height of the face bounding box;
and obtaining the position of the face in the face image to be detected based on the first point coordinate, the second point coordinate and the height.
In a specific embodiment of the present application, the third feature is the feature obtained by processing the first feature with the third sub-network. The third sub-network borrows the structure of the residual network. Specifically, as shown in fig. 6, the third sub-network includes a plurality of convolutional layers (only 7 are shown in fig. 6), and every two convolutional layers together with one shortcut connection form one residual block. The shortcut connection is the concrete implementation of the identity mapping in the residual network: it skips one or more layers, simply performs the identity mapping, and adds its input to the output of the stacked convolutional layers. The whole third sub-network uses residual blocks for the target detection task. When performing face detection, the third feature and the second feature are fused to obtain a fourth feature containing angle information, and the residual blocks shown in fig. 6 mainly use the fourth feature to perform face detection. When predicting the face position of a face image used for training, the coordinates (x, y) of the upper-left corner of the face bounding box, the coordinates (x1, y1) of the upper-right corner of the face bounding box, and the height h of the face bounding box are output. Thus, the first point coordinate obtained here is the coordinate of the upper-left corner of the face bounding box and the second point coordinate is the coordinate of the upper-right corner of the face bounding box, and the complete face bounding box can be obtained based on the first point coordinate, the second point coordinate and the height h, so as to determine the position of the face in the face image to be detected. Of course, the position of the face in the face image to be detected can also be determined by outputting the coordinates (x, y) of the upper-left corner and the coordinates (x2, y2) of the lower-right corner of the face bounding box.
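Recovering the full (possibly oblique) bounding box from the two labeled corner points and the height is a small vector computation. A sketch, under the assumption that h is measured perpendicular to the top edge:

```python
import math

def oriented_box_corners(x, y, x1, y1, h):
    """Return the four corners of the (possibly rotated) face bounding
    box given its upper-left corner (x, y), upper-right corner (x1, y1)
    and height h, with h measured perpendicular to the top edge
    (assumed convention). Corners are ordered upper-left, upper-right,
    lower-right, lower-left."""
    width = math.hypot(x1 - x, y1 - y)
    if width == 0:
        raise ValueError("degenerate box: top corners coincide")
    # Unit vector along the top edge, then the perpendicular pointing
    # "down" relative to the box (image y-axis grows downward).
    ux, uy = (x1 - x) / width, (y1 - y) / width
    px, py = -uy, ux
    return [
        (x, y),
        (x1, y1),
        (x1 + h * px, y1 + h * py),
        (x + h * px, y + h * py),
    ]

# Axis-aligned case: top edge horizontal, so the box is upright.
print(oriented_box_corners(10, 20, 110, 20, 50))
# [(10, 20), (110, 20), (110.0, 70.0), (10.0, 70.0)]
```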
In this embodiment, the structure of the residual network is borrowed, so that the third sub-network is deeper and can extract richer features; using these richer features for the target detection task yields better output results.
As an optional implementation manner, after obtaining the position of the face in the face image to be detected based on the first point coordinate, the second point coordinate, and the height, the method further includes:
and performing affine transformation by using the first point coordinates, the second point coordinates, the height and the plane rotation angle of the face in the face image to be detected so as to correct the face in the face image to be detected.
In the embodiment of the present application, because the third sub-network fuses in the second feature, that is, the angle feature extracted when the second sub-network performs angle prediction, the face bounding box obtained during face detection is an oblique box, which can well cover faces at different angles. To facilitate subsequent processing, the face in the face image to be detected needs to be corrected; specifically, an affine transformation matrix can be constructed using the first point coordinate, the second point coordinate, the height and the plane rotation angle of the face to correct the face in the face image to be detected.
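A sketch of this correction step using OpenCV; the sign convention for the angle and the crop handling are assumptions, not the patent's specification:

```python
import cv2
import numpy as np

def rectify_face(image, box_corners, angle_deg):
    """Rotate the detected face back to the upright position.

    box_corners: four (x, y) corners of the oblique face bounding box.
    angle_deg:   plane rotation angle predicted by the angle branch
                 (assumed counter-clockwise positive).
    Returns the rotated full image and the axis-aligned crop of the
    straightened face. A sketch; a production version would also clamp
    the crop to the image bounds."""
    corners = np.asarray(box_corners, dtype=np.float32)
    center = tuple(map(float, corners.mean(axis=0)))  # box center
    # Affine matrix that undoes the face's in-plane rotation.
    M = cv2.getRotationMatrix2D(center, -angle_deg, 1.0)
    h, w = image.shape[:2]
    rotated = cv2.warpAffine(image, M, (w, h))
    # Map the box corners through the same transform to find the
    # upright crop region.
    ones = np.ones((4, 1), dtype=np.float32)
    mapped = (M @ np.hstack([corners, ones]).T).T
    x0, y0 = mapped.min(axis=0).astype(int)
    x1, y1 = mapped.max(axis=0).astype(int)
    return rotated, rotated[max(y0, 0):y1, max(x0, 0):x1]
```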
In this embodiment, after the face bounding box is output, it is combined with the plane rotation angle of the face predicted by the second sub-network, and the face in the image to be detected is corrected through affine transformation to obtain an upright face, which is beneficial to improving the accuracy of subsequent face recognition or face matching.
As an optional implementation, the training process of the face detection convolutional neural network model includes:
carrying out primary labeling on a face image for training to obtain an initial face detection training data set;
performing re-labeling on the face image in the initial face detection training data set to obtain a target face detection training data set;
in this embodiment of the present application, the face image used for training may be a face image in a local database, for example: the database in a certain area usually stores face images of all residents living in a cell, a large building company usually stores face images of all managers and constructors, and the face images used for training may also be face images in some open source databases, for example: a FERET face database, an MIT face database, an ORL face database, etc. The above-mentioned annotating the face image for training mainly adopts the mode of artifical mark, marks out the position of this face in the face image for training and the angle of people's face, because the angle degree of difficulty of direct annotation people's face is great, the mark of the angle of people's face is realized through the position of two pupils in the annotation people's face to this application embodiment.
The position of the face in each training image is labeled first to obtain the initial face detection training data set; then the initially labeled training images are labeled again, marking the positions of the two pupils of the face, to obtain the target face detection training data set that can be used for training the preset convolutional neural network. This makes it convenient to train the two branches of the face detection convolutional neural network model, saving training time and relatively improving the training efficiency of the face detection convolutional neural network model.
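One plausible shape of the two labeling passes is sketched below; the field names and values are hypothetical, not the patent's annotation format:

```python
# Initial labeling: face position only (initial face detection
# training data set). Field names here are hypothetical.
initial_sample = {
    "image": "train/0001.jpg",
    "box": {"x": 83, "y": 41, "x1": 180, "y1": 52, "h": 102},
}

# Re-labeling: the same image gains the two pupil positions
# (target face detection training data set).
target_sample = {
    **initial_sample,
    "pupils": {"left": (112, 78), "right": (156, 83)},
}
```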
Training a third convolution block of a preset convolution neural network by using the initial face detection training data set to obtain a third sub-network;
in the specific embodiment of the present application, as shown in fig. 7, the preset convolutional neural network mainly includes a first convolutional block, a second convolutional block, and a third convolutional block, and as can be seen from the embodiment shown in fig. 2, the first convolutional block is a shared convolutional block of the second convolutional block and the third convolutional block, and is used to extract shared features required by the second convolutional block and the third convolutional block, and the second convolutional block and the third convolutional block are two branches after the first convolutional block. The convolution layer in the first convolution block can be flexibly set according to actual conditions, such as 8 convolution layers, 10 convolution layers, and the like, and is not limited specifically.
Further, a loss function is defined in training the third convolution block as follows:
where the formula LOSS1 represents the loss value of the entire third convolution block:

$$\begin{aligned} LOSS1 ={} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\ &+ \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\ &+ \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(c_i-\hat{c}_i\right)^2 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(c_i-\hat{c}_i\right)^2 \\ &+ \sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2 \end{aligned}$$

The first term uses the sum of squared errors as the loss function of the face position prediction; the second term uses square-rooted errors as the loss function of the bounding box width and height; the third and fourth terms use the sum-of-squares error SSE (the sum of squares due to error) as the loss function of the confidence; and the last term uses SSE as the loss function of the class probability.

$\lambda$ is a given constant. $(x_i, y_i)$ denotes the predicted position of the face bounding box in cell grid $i$, and $(\hat{x}_i, \hat{y}_i)$ the actual position of the face in cell $i$, derived from the training data. $(w_i, h_i)$ denotes the predicted width and height of the face bounding box in cell grid $i$, and $(\hat{w}_i, \hat{h}_i)$ the actual width and height of the face in cell $i$, derived from the training data. $c_i$ denotes the confidence score of the face bounding box predicted in cell $i$, and $\hat{c}_i$ the intersection of the predicted face bounding box position in cell $i$ with the actual face position. $p_i(c)$ denotes the probability of the predicted value in cell $i$, and $\hat{p}_i(c)$ the probability of the true value in cell $i$. $S^2$ denotes the number of grid cells and $B$ the number of prediction boxes per cell. $\mathbb{1}_{i}^{obj}$ indicates whether a face is present in cell grid $i$, and $\mathbb{1}_{ij}^{obj}$ indicates that the $j$th bounding box in cell grid $i$ predicts the correct class. $\lambda_{coord}=5$ and $\lambda_{noobj}=0.5$. The network parameter weights of the third convolution block are updated during training according to the value of LOSS1 until LOSS1 is less than a set threshold. It can be understood that, since the input of the third convolution block requires extraction by the first convolution block, the first convolution block is also trained in the process of training the third convolution block, resulting in the first sub-network.
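A compact PyTorch rendering of a loss of this form is sketched below, under simplifying assumptions: one predicted box per grid cell (B = 1) and targets already laid out on the S×S grid. It follows the standard YOLO-style form the text describes, not the patent's exact code:

```python
import torch

def loss1(pred, target, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """YOLO-style detection loss as described above, with B = 1.

    pred, target: (N, S, S, 5 + C) tensors holding x, y, w, h,
                  confidence and C class probabilities per cell.
    obj_mask:     (N, S, S) boolean, True where a face is present."""
    obj = obj_mask.unsqueeze(-1).float()
    noobj = 1.0 - obj
    # Position term: sum of squared errors over x, y.
    xy_loss = ((pred[..., 0:2] - target[..., 0:2]) ** 2 * obj).sum()
    # Square-rooted width/height, as in the second term of LOSS1.
    wh_loss = ((pred[..., 2:4].clamp(min=0).sqrt()
                - target[..., 2:4].sqrt()) ** 2 * obj).sum()
    # Confidence terms, down-weighted where no face is present.
    conf_err = (pred[..., 4:5] - target[..., 4:5]) ** 2
    conf_loss = (conf_err * obj).sum() + lambda_noobj * (conf_err * noobj).sum()
    # Class-probability term.
    class_loss = ((pred[..., 5:] - target[..., 5:]) ** 2 * obj).sum()
    return lambda_coord * (xy_loss + wh_loss) + conf_loss + class_loss
```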
Training a second convolution block of a preset convolution neural network by using the target face detection training data set to obtain a second sub-network;
further, a loss function is defined in training the second convolution block as follows:
where the formula LOSS2 represents the loss value of the entire second convolution block:

$$LOSS2 = \frac{1}{2n}\sum_{x}\left\|y(x)-a(x)\right\|^2$$

Here $y(x)$ represents the correctly labeled data, $a(x)$ is the predicted value of the second convolution block, and $n$ represents the number of values. The network parameter weights of the second convolution block are updated during training according to the value of LOSS2 until LOSS2 is less than a set threshold, at which point the individual training of the second convolution block is stopped.
And inputting the target face detection training data set into a preset convolutional neural network for integral training to obtain the face detection convolutional neural network model.
In the specific embodiment of the present application, after the second convolution block and the third convolution block are trained separately, the target face detection training data set is used to jointly train the first convolution block, the second convolution block, and the third convolution block of the preset convolutional neural network, and the network parameter weights of the second convolution block and the third convolution block are updated according to the face detection result and the face angle prediction result until the loss function values of both the second convolution block and the third convolution block are smaller than the preset value.
In this embodiment, before the whole preset convolutional neural network is trained, the second convolutional block and the third convolutional block are trained separately to obtain the second subnetwork and the third subnetwork, so that the generalization capability of the second subnetwork and the third subnetwork is improved to a certain extent.
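Putting the three stages together, a training-loop sketch might look as follows; the loaders, optimizer, learning rate, thresholds, and the parameter-freezing scheme are placeholder choices, and the model is assumed to expose branch modules named as in the earlier FaceDetectionNet sketch:

```python
import torch

def mse_loss2(pred, labeled):
    # LOSS2: quadratic cost between predicted values and correct labels.
    return ((labeled - pred) ** 2).sum() / (2 * labeled.numel())

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

def train_staged(model, initial_loader, target_loader, detection_loss,
                 threshold=1e-3, lr=1e-3):
    """Sketch of the three training stages described above."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    # Stage 1: the initial data set (face boxes only) trains the third
    # convolution block; the shared first block is trained along with it.
    set_trainable(model.second_convs, False)
    set_trainable(model.fc_pupils, False)
    for image, box_target in initial_loader:
        _, box = model(image)
        loss = detection_loss(box, box_target)        # LOSS1
        opt.zero_grad(); loss.backward(); opt.step()
        if loss.item() < threshold:
            break

    # Stage 2: the re-labeled target data set trains the second
    # convolution block (angle branch) on the pupil labels.
    set_trainable(model.second_convs, True)
    set_trainable(model.fc_pupils, True)
    set_trainable(model.first_subnet, False)
    set_trainable(model.third_convs, False)
    set_trainable(model.box_head, False)
    for image, pupil_target, box_target in target_loader:
        pupils, _ = model(image)
        loss = mse_loss2(pupils, pupil_target)        # LOSS2
        opt.zero_grad(); loss.backward(); opt.step()
        if loss.item() < threshold:
            break

    # Stage 3: joint training of all three blocks on the target set.
    set_trainable(model, True)
    for image, pupil_target, box_target in target_loader:
        pupils, box = model(image)
        loss = detection_loss(box, box_target) + mse_loss2(pupils, pupil_target)
        opt.zero_grad(); loss.backward(); opt.step()
```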
Referring to fig. 8, fig. 8 is a schematic structural diagram of a face detection apparatus according to an embodiment of the present application, and as shown in fig. 8, the apparatus includes:
the image acquisition module 81 is used for acquiring a face image to be detected of a target object;
a first feature extraction module 82, configured to input the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to perform first feature extraction;
the angle prediction module 83 is configured to input the first feature into a second sub-network of the face detection convolutional neural network model to perform angle prediction, so as to obtain the plane rotation angle of the face in the face image to be detected;
and the position prediction module 84 is configured to input the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and to obtain the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction.
Optionally, as shown in fig. 9, the angle prediction module 83 includes:
a second feature extraction unit 8301, configured to perform convolution processing on the first feature by using multiple convolution layers to obtain the second feature;
a pupil positioning unit 8302, configured to classify the second feature by using a fully connected layer to locate the positions of the two pupils of the face in the face image to be detected;
an angle obtaining unit 8303, configured to calculate, based on the positions of the two pupils, the included angle between the line connecting the two pupils and the horizontal line, so as to obtain the plane rotation angle of the face in the face image to be detected.
Optionally, as shown in fig. 10, the position prediction module 84 includes:
a feature fusion unit 8401, configured to fuse the second feature and the third feature to obtain a fourth feature including angle information;
and the position obtaining unit 8402 is configured to perform face detection by using the fourth feature to obtain a position of a face in the face image to be detected.
Alternatively, as shown in fig. 11, the position obtaining unit 8402 includes:
a bounding box output unit 84021, configured to process the fourth feature using multiple residual blocks, and output a first point coordinate and a second point coordinate of the face bounding box, and a height of the face bounding box;
a position determining unit 84022, configured to obtain a position of a face in the face image to be detected based on the first point coordinate, the second point coordinate, and the height.
Optionally, as shown in fig. 12, the apparatus further includes:
the first labeling module 85 is configured to label a face image for training for the first time to obtain an initial face detection training data set;
a second labeling module 86, configured to label the face image in the initial face detection training data set again to obtain a target face detection training data set;
the first training module 87 is configured to train a third convolution block of a preset convolution neural network by using the initial face detection training data set to obtain the third sub-network;
a second training module 88, configured to train a second convolution block of a preset convolution neural network with the target face detection training data set, so as to obtain the second sub-network;
and a third training module 89, configured to input the target face detection training data set into a preset convolutional neural network for overall training, so as to obtain the face detection convolutional neural network model.
According to an embodiment of the present application, each step in the face detection methods shown in fig. 2 and fig. 3 may be executed by the corresponding unit module in the face detection apparatus provided in the embodiment of the present application, and can achieve the same or similar beneficial effects. For example, steps S21 and S31 may be implemented by the image acquisition module 81 in the face detection apparatus, and steps S23 and S32 may be implemented by the angle prediction module 83 and the first feature extraction module 82 in the face detection apparatus, respectively, and so on. It should be noted that the face detection apparatus provided in the embodiment of the present application can be applied to real-life scenes such as face detection, face recognition, and face search; specifically, it can be applied to apparatuses capable of performing face detection, such as a server, a computer, or a mobile terminal.
Based on the description of the method embodiments and the device embodiments, an embodiment of the present application further provides an electronic device. Referring to fig. 13, the electronic device at least includes: a memory 1301 for storing a computer program; a processor 1302 (or CPU), the computing core and control core of the electronic device, configured to call the computer program stored in the memory 1301 to implement the steps in the face detection method embodiments; an input device 1303 for input; and an output device 1304 for output. It can be understood that the memory 1301, the processor 1302, the input device 1303, and the output device 1304 in the electronic device may be connected via a bus or by other means. In one embodiment, the processor 1302 is specifically configured to invoke the computer program to perform the following steps:
acquiring a face image to be detected of a target object;
inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature;
inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected;
and inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction.
In an embodiment, the inputting, by the processor 1302, the first feature into the second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected includes:
carrying out convolution processing on the first feature by utilizing a plurality of convolution layers to obtain a second feature;
classifying the second feature by using a fully connected layer to locate the positions of the two pupils of the face in the face image to be detected;
and calculating, based on the positions of the two pupils, the included angle between the line connecting the two pupils and the horizontal line, so as to obtain the plane rotation angle of the face in the face image to be detected.
In another embodiment, the obtaining, by the processor 1302, the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction includes:
fusing the second feature and the third feature to obtain a fourth feature containing angle information;
and performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected.
In another embodiment, the processor 1302 is configured to execute the face detection by using the fourth feature to obtain a position of a face in the face image to be detected, and includes:
processing the fourth feature by using a plurality of residual blocks, and outputting a first point coordinate and a second point coordinate of the face bounding box and the height of the face bounding box;
and obtaining the position of the face in the face image to be detected based on the first point coordinate, the second point coordinate and the height.
In yet another embodiment, the processor 1302 is configured to perform training of the face detection convolutional neural network model, including:
carrying out primary labeling on a face image for training to obtain an initial face detection training data set;
performing re-labeling on the face image in the initial face detection training data set to obtain a target face detection training data set;
training a third convolution block of a preset convolution neural network by using the initial face detection training data set to obtain a third sub-network;
training a second convolution block of a preset convolution neural network by using the target face detection training data set to obtain a second sub-network;
and inputting the target face detection training data set into a preset convolutional neural network for integral training to obtain the face detection convolutional neural network model.
By way of example, the electronic device includes, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote control, a touch pad, or a voice control device, such as: computers, tablet computers, smart phones, smart wearable devices, and Personal Digital Assistants (PDAs), among others. The network where the electronic device is located includes, but is not limited to, the internet, a wide area network, a local area network, a virtual private network, and the like. Electronic devices may include, but are not limited to, memory 1301, processor 1302, input device 1303, output device 1304. It will be appreciated by those skilled in the art that the schematic diagrams are merely examples of an electronic device and are not limiting of an electronic device and may include more or fewer components than those shown, or some components in combination, or different components.
It should be noted that, since the processor 1302 of the electronic device executes the computer program to implement the steps in the above-mentioned face detection method, the embodiments of the face detection method are all applicable to the electronic device, and all can achieve the same or similar beneficial effects.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the above-mentioned face detection method.
In particular, the computer program when executed by the processor implements the steps of:
acquiring a face image to be detected of a target object;
inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature;
inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain the plane rotation angle of the face in the face image to be detected;
and inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction.
Optionally, the computer program when executed by the processor further implements the steps of: carrying out convolution processing on the first feature by utilizing a plurality of convolution layers to obtain a second feature; classifying the second feature by using a fully connected layer to locate the positions of the two pupils of the face in the face image to be detected; and calculating, based on the positions of the two pupils, the included angle between the line connecting the two pupils and the horizontal line, so as to obtain the plane rotation angle of the face in the face image to be detected.
Optionally, the computer program when executed by the processor further implements the steps of: fusing the second feature and the third feature to obtain a fourth feature containing angle information; and performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected.
Optionally, the computer program when executed by the processor further implements the steps of: processing the fourth feature by using a plurality of residual blocks, and outputting a first point coordinate and a second point coordinate of the face bounding box and the height of the face bounding box; and obtaining the position of the face in the face image to be detected based on the first point coordinate, the second point coordinate and the height.
Optionally, the computer program when executed by the processor further implements the steps of: carrying out primary labeling on a face image for training to obtain an initial face detection training data set; performing re-labeling on the face image in the initial face detection training data set to obtain a target face detection training data set; training a third convolution block of a preset convolution neural network by using the initial face detection training data set to obtain a third sub-network; training a second convolution block of a preset convolution neural network by using the target face detection training data set to obtain a second sub-network; and inputting the target face detection training data set into a preset convolutional neural network for integral training to obtain the face detection convolutional neural network model.
Illustratively, the computer program of the computer-readable storage medium comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
It should be noted that, since the computer program of the computer-readable storage medium is executed by the processor to implement the steps in the above-mentioned face detection method, all the embodiments of the face detection method are applicable to the computer-readable storage medium, and can achieve the same or similar beneficial effects.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. A face detection method, comprising:
acquiring a face image to be detected of a target object;
inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature;
inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain an in-plane rotation angle of the face in the face image to be detected;
and inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and obtaining the position of the face in the face image to be detected according to the third feature and a second feature extracted by the second sub-network during angle prediction.
2. The method according to claim 1, wherein the inputting the first feature into the second sub-network of the face detection convolutional neural network model for angle prediction to obtain the in-plane rotation angle of the face in the face image to be detected comprises:
performing convolution processing on the first feature using a plurality of convolution layers to obtain a second feature;
classifying the second feature using a fully connected layer to locate the positions of two pupils of the face in the face image to be detected;
and, based on the positions of the two pupils, calculating the included angle between the line connecting the two pupils and the horizontal, thereby obtaining the in-plane rotation angle of the face in the face image to be detected.
3. The method according to claim 1, wherein the obtaining the position of the face in the face image to be detected according to the third feature and the second feature extracted by the second sub-network during angle prediction comprises:
fusing the second feature and the third feature to obtain a fourth feature containing angle information;
and performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected.
4. The method according to claim 3, wherein the performing face detection by using the fourth feature to obtain the position of the face in the face image to be detected comprises:
processing the fourth feature using a plurality of residual blocks, and outputting a first point coordinate and a second point coordinate of a face bounding box together with a height of the face bounding box;
and obtaining the position of the face in the face image to be detected based on the first point coordinate, the second point coordinate and the height.
5. The method according to any one of claims 1 to 4, wherein the training process of the face detection convolutional neural network model comprises:
performing initial labeling on face images for training to obtain an initial face detection training data set;
re-labeling the face images in the initial face detection training data set to obtain a target face detection training data set;
training a third convolution block of a preset convolutional neural network using the initial face detection training data set to obtain the third sub-network;
training a second convolution block of the preset convolutional neural network using the target face detection training data set to obtain the second sub-network;
and inputting the target face detection training data set into the preset convolutional neural network for overall training to obtain the face detection convolutional neural network model.
6. An apparatus for face detection, the apparatus comprising:
the image acquisition module is used for acquiring a face image to be detected of the target object;
the first feature extraction module is used for inputting the face image to be detected into a first sub-network of a pre-trained face detection convolutional neural network model to extract a first feature;
the angle prediction module is used for inputting the first feature into a second sub-network of the face detection convolutional neural network model for angle prediction to obtain an in-plane rotation angle of the face in the face image to be detected;
and the position prediction module is used for inputting the first feature into a third sub-network of the face detection convolutional neural network model for processing to obtain a third feature, and for obtaining the position of the face in the face image to be detected according to the third feature and a second feature extracted by the second sub-network during angle prediction.
7. The apparatus of claim 6, wherein the angle prediction module comprises:
a second feature extraction unit, configured to perform convolution processing on the first feature using a plurality of convolution layers to obtain the second feature;
a pupil positioning unit, configured to classify the second feature using a fully connected layer to locate the positions of two pupils of the face in the face image to be detected;
and an angle acquisition unit, configured to calculate, based on the positions of the two pupils, the included angle between the line connecting the two pupils and the horizontal, to obtain the in-plane rotation angle of the face in the face image to be detected.
8. The apparatus of claim 6, wherein the location prediction module comprises:
the feature fusion unit is used for fusing the second feature and the third feature to obtain a fourth feature containing angle information;
and the position acquisition unit is used for carrying out face detection by utilizing the fourth characteristic to obtain the position of the face in the face image to be detected.
9. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps in the face detection method as claimed in any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps in the face detection method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911232487.5A (granted as CN112906446B) | 2019-12-04 | 2019-12-04 | Face detection method, face detection device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112906446A (en) | 2021-06-04 |
CN112906446B (en) | 2024-07-05 |
Family
ID=76110836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911232487.5A (granted as CN112906446B, Active) | Face detection method, face detection device, electronic equipment and computer readable storage medium | 2019-12-04 | 2019-12-04 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112906446B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107871106A (en) * | 2016-09-26 | 2018-04-03 | Beijing Eyecool Technology Co., Ltd. | Face detection method and device |
CN107871099A (en) * | 2016-09-23 | 2018-04-03 | Beijing Eyecool Technology Co., Ltd. | Face detection method and apparatus |
US20190026538A1 (en) * | 2017-07-21 | 2019-01-24 | Altumview Systems Inc. | Joint face-detection and head-pose-angle-estimation using small-scale convolutional neural network (cnn) modules for embedded systems |
CN109635755A (en) * | 2018-12-17 | 2019-04-16 | Suzhou Keyuan Software Technology Development Co., Ltd. | Face extraction method, apparatus and storage medium |
WO2019128646A1 (en) * | 2017-12-28 | 2019-07-04 | Shenzhen Lifei Technology Co., Ltd. | Face detection method, method and device for training parameters of convolutional neural network, and medium |
CN110309706A (en) * | 2019-05-06 | 2019-10-08 | Shenzhen Huafu Information Technology Co., Ltd. | Face critical point detection method, apparatus, computer equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
Feng Xiangming; Pan Lian: "Face detection method based on LBP and YOLO", Video Engineering, no. 18 * |
Liu Yingjian; Zhang Qigui: "Face detection under unconstrained conditions based on Edge Boxes and deep learning", Modern Electronics Technique, no. 13, 3 July 2018 (2018-07-03) * |
Deng Zongping; Zhao Qijun; Chen Hu: "Face pose classification method based on deep learning", Computer Technology and Development, no. 07 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |