CN108509828A - Face recognition method and face recognition device - Google Patents
Face recognition method and face recognition device
- Publication number
- CN108509828A CN108509828A CN201710110054.7A CN201710110054A CN108509828A CN 108509828 A CN108509828 A CN 108509828A CN 201710110054 A CN201710110054 A CN 201710110054A CN 108509828 A CN108509828 A CN 108509828A
- Authority
- CN
- China
- Prior art keywords
- face
- image
- network
- training
- model parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to the field of image processing and provides a face recognition method and a face recognition device. Key points of a face are determined from an image, and a rectangular face frame enclosing the face is determined according to the key points; rotation correction is performed on the face frame to obtain a frontal face image; perspective transforms are applied to the face image from at least two angles to obtain M sample images; the M sample images are trained with a training network to obtain network model parameters, and face feature description vectors are extracted based on the network model parameters; face recognition is then performed according to the face feature description vectors. Compared with the prior art, the perspective transform reduces the complexity of the variations to be handled, saves resources and cost, and increases the number of sample images, ensuring the accuracy of face recognition.
Description
Technical field
The invention belongs to the field of image processing, and in particular relates to a face recognition method and a face recognition device.
Background technology
Active demand with each field to auto authentication technology rapidly and efficiently, face recognition technology is because with non-
The advantages that contact, simple collecting device and become current research hotspot.But since the face structure of people is more complicated, and
Face structure can be changed because of the influence of the factors such as expression, illumination, face recognition technology is difficult to be widely applied to real life
In.
In recent years, as the magnanimity of data increases the development with depth learning technology, the accuracy rate of face recognition technology
Also obtained qualitative leap, such as DeepID, FaceNet, DeepFace based on the faceform of deep neural network in face
Preferable descriptive power is shown in terms of feature extraction, effectively improve face alignment, recognition of face accuracy rate.
However, depth learning technology efficiently all relies on a large amount of training sample, the data in recognition of face direction at present
Sample still lacks, and flag data sample needs to expend a large amount of human cost.Therefore, at present mostly use Image Reversal,
The technologies such as random cropping increase the quantity of training sample, and it is very few caused which can solve training sample to a certain extent
Model overfitting problem promotes recognition accuracy.But in training data wretched insufficiency, over-fitting still has, and leads
The problems such as generalization ability of cause depth model is weak, and discrimination is low.
Summary of the invention
In view of this, the present invention provides a face recognition method and a face recognition device, to quickly achieve face recognition over a whole image.
In one aspect, the present invention provides a face recognition method, the method comprising:
determining key points of a face from an image, and determining a rectangular face frame enclosing the face according to the key points;
performing rotation correction on the face frame to obtain a frontal face image;
performing perspective transforms on the face image from at least two angles to obtain M sample images, the M being a positive integer greater than or equal to 2;
training the M sample images with a training network to obtain network model parameters, and extracting face feature description vectors based on the network model parameters;
performing face recognition according to the face feature description vectors.
In another aspect, the present invention provides a face recognition device, comprising:
a determination unit, configured to determine key points of a face from an image, and to determine a rectangular face frame enclosing the face according to the key points;
a correction unit, configured to perform rotation correction on the face frame to obtain a frontal face image;
a perspective transform unit, configured to perform perspective transforms on the face image from at least two angles to obtain M sample images, the M being a positive integer greater than or equal to 2;
a training unit, configured to train the M sample images with a training network to obtain network model parameters, and to extract face feature description vectors based on the network model parameters;
a face recognition unit, configured to perform face recognition according to the face feature description vectors.
The beneficial effects of the invention are as follows: key points of a face are determined from an image, and a rectangular face frame enclosing the face is determined according to the key points; rotation correction is performed on the face frame to obtain a frontal face image; perspective transforms are applied to the face image from at least two angles to obtain M sample images; the M sample images are trained with a training network to obtain network model parameters, and face feature description vectors are extracted based on the network model parameters; face recognition is performed according to the face feature description vectors. Compared with the prior art, the perspective transform reduces variation complexity, saves resources and cost, increases the number of sample images, and ensures the accuracy of face recognition.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a face recognition method provided by an embodiment of the present invention;
Fig. 2A is a schematic diagram of framing a face;
Fig. 2B is a schematic diagram of the frontal face image obtained after rotation correction of the face frame;
Fig. 2C is a schematic diagram of the images obtained after perspective transforms, taking the father's portrait as an example;
Fig. 2D is a schematic diagram of dividing 10 facial regions, taking the father's portrait as an example;
Fig. 2E is a schematic diagram of face samples at 3 different scales for 2 facial regions, taking the father's portrait as an example;
Fig. 3 is a schematic structural diagram of a face recognition device provided by an embodiment of the present invention.
Specific implementation mode
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to illustrate the present invention, not to limit it.
Method embodiment
This embodiment provides a face recognition method, described from the perspective of a face recognition device. Referring to Fig. 1, the method includes step S101, step S102, step S103, step S104, and step S105.
Step S101: determine key points of a face from an image, and determine a rectangular face frame enclosing the face according to the key points.
It should be understood that the image contains one or more face regions, for example as shown in Fig. 2A.
The embodiment of the present invention determines the key points of the face in the image according to facial features.
Taking Fig. 2A as an example, a sliding-window classification and detection method is used to determine the possible center point and face size of each face in the image; a sliding window (i.e., the face frame in Fig. 2A) is created according to the determined center point and face size, and a deep network is then used for classification and detection.
Preferably, a deep neural network based on face key points can also be trained to perform key-point localization on each detected face, after which the face frame is drawn.
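The sliding-window enumeration underlying such detection can be sketched as follows. This is a minimal illustration of window generation only; the classifier that scores each window (the deep network) is omitted, and the function name `sliding_windows` is an assumption, not something the patent names:

```python
def sliding_windows(img_h, img_w, win, stride):
    """Enumerate top-left corners of all win x win windows over an img_h x img_w image."""
    coords = []
    for y in range(0, img_h - win + 1, stride):
        for x in range(0, img_w - win + 1, stride):
            coords.append((x, y))
    return coords

# A detector would score each window with a classifier (e.g. a deep network)
# and keep the highest-scoring windows as candidate face frames.
windows = sliding_windows(64, 64, win=32, stride=16)
print(len(windows))  # 3 positions per axis -> 9 windows
```

In practice the enumeration is repeated over an image pyramid so that faces of different sizes fall under some window.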
Step S102: perform rotation correction on the face frame to obtain a frontal face image.
The embodiment of the present invention does not limit how the rotation correction is implemented; it can, for example, be the vertical correction method based on a 3D face model, or the rotation-correction transformation matrix specially provided by the present invention.
As shown in Fig. 2B, after rotation correction is performed on the face frame, a corrected frontal face image is obtained.
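The in-plane part of such a correction can be sketched as a 2D rotation derived from two eye key points. This is a sketch under the assumption that correction reduces to levelling the eye line; the patent's own transformation matrix and its 3D-model variant are not specified, and the helper name `rotation_correction` is invented for illustration:

```python
import numpy as np

def rotation_correction(points, left_eye, right_eye):
    """Rotate 2D keypoints about the eye midpoint so the eye line becomes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.arctan2(dy, dx)            # current tilt of the eye line
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])       # rotation matrix undoing the tilt
    center = np.array([(left_eye[0] + right_eye[0]) / 2,
                       (left_eye[1] + right_eye[1]) / 2])
    return (np.asarray(points, float) - center) @ R.T + center

pts = np.array([[0.0, 0.0], [2.0, 2.0]])             # eyes tilted 45 degrees
corrected = rotation_correction(pts, pts[0], pts[1])  # eyes now on one horizontal line
```

The same rotation matrix, applied to the whole image patch, yields the upright face image of Fig. 2B.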
Step S103: perform perspective transforms on the face image from at least two angles to obtain M sample images, the M being a positive integer greater than or equal to 2.
To increase the number of sample images, slight three-dimensional perspective transforms can be applied to the face image. Usually perspective transforms are applied from at least two angles to obtain more sample images, for example M sample images.
Optionally, step S103 can use the transformation matrices of four angles, namely left, right, top-down, and bottom-up, respectively, to perform perspective transforms on the face image.
Specifically, the left-view transformation matrix can be used to perform a perspective transform on the face image to obtain multiple sample images.
The right-view transformation matrix can be used to perform a perspective transform on the face image to obtain multiple sample images.
The top-down transformation matrix can be used to perform a perspective transform on the face image to obtain multiple sample images.
The bottom-up transformation matrix can be used to perform a perspective transform on the face image to obtain multiple sample images.
For example, using the transformation matrices of the four angles (left, right, top-down, and bottom-up) respectively, small-angle perspective transforms are applied to the father's portrait in Fig. 2B, producing 12 similar images with small viewpoint changes, as shown in Fig. 2C.
It should be understood that the four angles are only examples; perspective transforms can be performed over different degrees of freedom to obtain a large number of sample images.
It should also be understood that the degrees of freedom over which perspective transforms are performed can be chosen according to the application scenario, so as to obtain a large number of sample images that improve the face recognition success rate.
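A perspective transform is a 3×3 homography, and one standard way to build a transformation matrix for a given view change is the direct linear transform (DLT) over four point correspondences. The patent does not give its matrices, so the correspondences below (a unit square whose top edge is squeezed inward, simulating a slight upward view) and the helper names are illustrative assumptions:

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 perspective matrix mapping 4 src points to 4 dst points (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of A gives the homography's 9 entries (up to scale).
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply a homography to one 2D point (homogeneous coordinates, then dehomogenize)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

# Squeeze the top edge of a unit square inward: a small simulated upward view.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0.1, 0), (0.9, 0), (1, 1), (0, 1)]
H = homography(src, dst)
```

Warping every pixel of the face image through `H` (e.g. with a library routine such as OpenCV's `warpPerspective`) yields one synthetic sample image per chosen view.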
Step S104: train the M sample images with a training network to obtain network model parameters, and extract face feature description vectors based on the network model parameters.
Optionally, the training network is a convolutional network.
The number of sample images can also be extended before step S104 is executed. Specifically, N facial regions are divided from the M sample images respectively, the N being a positive integer greater than the M. Preferably, when dividing the N facial regions, N facial regions of different scales are chosen from the M sample images respectively.
After the N facial regions are obtained, training the M sample images with the training network in step S104 correspondingly becomes training the N facial regions with the training network to obtain network model parameters, and extracting the face feature description vectors based on the network model parameters.
That is, the training the M sample images with a training network to obtain network model parameters, and extracting face feature description vectors based on the network model parameters, includes: training the N facial regions with the training network to obtain network model parameters, and extracting face feature description vectors based on the network model parameters.
For example, on the RGB image and the gray-level image of one sample image such as Fig. 2C, 10 facial regions are extracted respectively (as shown in Fig. 2D); each facial region is then transformed to 3 different scales to obtain 60 region images (Fig. 2E shows face samples at 3 different scales for 2 facial regions). 60 convolutional networks are trained for the 60 region images respectively, a 160-dimensional face feature description vector is extracted from each convolutional network, and the vectors are combined to form a 160×60-dimensional face feature description vector.
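The region-and-scale expansion and the final combination can be sketched as a crop-and-concatenate step. The crop helper and the placeholder zero features below are illustrative assumptions; only the counts (10 regions × 3 scales over RGB and gray images → 60 patches, and 60 × 160 = 9600 combined dimensions) follow the example in the description:

```python
import numpy as np

def multiscale_crops(image, center, sizes):
    """Crop square patches of several sizes around one facial-region center.

    image: (H, W) array; center: (row, col); sizes: odd patch widths.
    """
    r, c = center
    patches = []
    for s in sizes:
        h = s // 2
        patches.append(image[r - h:r + h + 1, c - h:c + h + 1])
    return patches

def combine_features(per_network_features):
    """Concatenate per-network descriptors into one face feature description vector."""
    return np.concatenate(per_network_features)

img = np.arange(100).reshape(10, 10)                      # stand-in gray image
patches = multiscale_crops(img, center=(5, 5), sizes=[3, 5, 7])
# 60 networks x 160-dim outputs combine into a 9600-dim description vector:
feats = combine_features([np.zeros(160) for _ in range(60)])
```

In the patent's example each patch feeds its own convolutional network; the zero vectors here merely stand in for those networks' 160-dimensional outputs.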
Step S105: perform face recognition according to the face feature description vectors.
It should be understood that the face feature description vectors carry the essential features needed for face recognition.
Below, a concrete application of step S104 and step S105 is given as an example. A VGGNet network structure comprising 37 layers is adopted, and network training is performed on all facial regions divided from all sample images; the network structure and parameters are shown in Table 1, where Layer, Type, and name denote the layer index, layer type, and layer name respectively, and support, filtdim, numfilts, stride, and pad denote the specific parameters of each layer.
Table 1: network structure and parameters
After training is completed, the fc7-layer output vectors of the 60 networks are extracted respectively and combined to form a 160×60-dimensional feature vector. When executing step S105, the cosine distance is used to measure the similarity between 2 target faces, and face confirmation is achieved according to a certain threshold. Target face recognition is then performed in a database containing a large number of sample images.
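The cosine-distance verification of step S105 can be sketched as follows. The threshold value 0.8 is an assumption for illustration; the description only says "a certain threshold", which in practice is tuned on a validation set:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two face feature description vectors."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feat_a, feat_b, threshold=0.8):
    """Face confirmation: accept the pair when similarity exceeds the threshold."""
    return cosine_similarity(feat_a, feat_b) >= threshold

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([1.0, 0.1, 1.0])   # near-duplicate descriptor
v3 = np.array([0.0, 1.0, 0.0])   # very different descriptor
print(same_person(v1, v2), same_person(v1, v3))  # True False
```

Recognition in a database then reduces to comparing a probe vector against all stored vectors and keeping matches above the threshold.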
Apparatus embodiment one:
It should be noted that the face recognition device provided by this apparatus embodiment corresponds to the method embodiment above and can implement the method embodiment; therefore, for the specific implementation of each unit included in the face recognition device, reference can be made to the corresponding description in the method embodiment.
The face recognition device provided by this apparatus embodiment, referring to Fig. 3, includes:
a determination unit 211, configured to determine key points of a face from an image, and to determine a rectangular face frame enclosing the face according to the key points;
a correction unit 212, configured to perform rotation correction on the face frame to obtain a frontal face image;
a perspective transform unit 213, configured to perform perspective transforms on the face image from at least two angles to obtain M sample images, the M being a positive integer greater than or equal to 2;
a training unit 214, configured to train the M sample images with a training network to obtain network model parameters, and to extract face feature description vectors based on the network model parameters;
a face recognition unit 215, configured to perform face recognition according to the face feature description vectors.
Optionally, the training unit 214 is configured to divide N facial regions from the M sample images respectively, the N being a positive integer greater than the M, to train the N facial regions with the training network to obtain network model parameters, and to extract face feature description vectors based on the network model parameters.
Optionally, the training unit 214 is configured to choose N facial regions of different scales from the M sample images respectively.
Optionally, the training network is a convolutional network.
Optionally, the perspective transform unit 213 is configured to perform perspective transforms on the face image using the transformation matrices of four angles, namely left, right, top-down, and bottom-up, respectively.
Apparatus embodiment two:
This apparatus embodiment provides a face recognition device including a processor and a memory;
the memory stores computer instructions;
the processor executes the computer instructions in the memory, so that the face recognition device executes the face recognition method provided by the method embodiment.
Program product
This embodiment provides a storage medium, which can be the above-mentioned memory; the storage medium is used to store computer instructions;
the processor of the face recognition device executes the computer instructions in the storage medium, so that the face recognition device executes the face recognition method provided by the method embodiment.
This embodiment also provides a computer program; the processor of the face recognition device executes the computer instructions included in the computer program, so that the face recognition device executes the face recognition method provided by the method embodiment. In addition, the computer program can be stored in the above-mentioned storage medium.
It is apparent to those skilled in the art that, for convenience and brevity of description, only the division of the functional units described above is used as an example. In practical applications, the functions can be assigned to different functional units and modules as needed; that is, the internal structure of the face recognition device can be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments can be integrated in one processing unit, or each unit can exist alone physically, or two or more units can be integrated in one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person can use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device can be implemented in other ways. For example, the face recognition device embodiment described above is only schematic; the division of the units is only a division of logical functions, and in actual implementation there may be other division manners. For example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed can be indirect coupling or communication connection through some interfaces, devices, or units, and can be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
In addition, the functional units in the embodiments of the present invention can be integrated in one processing unit, or each unit can exist alone physically, or two or more units can be integrated in one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments described above are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A face recognition method, characterized by comprising:
determining key points of a face from an image, and determining a rectangular face frame enclosing the face according to the key points;
performing rotation correction on the face frame to obtain a frontal face image;
performing perspective transforms on the face image from at least two angles to obtain M sample images, the M being a positive integer greater than or equal to 2;
training the M sample images with a training network to obtain network model parameters, and extracting face feature description vectors based on the network model parameters;
performing face recognition according to the face feature description vectors.
2. The face recognition method according to claim 1, characterized in that:
before the training the M sample images with a training network, the method includes: dividing N facial regions from the M sample images respectively, the N being a positive integer greater than the M;
the training the M sample images with a training network to obtain network model parameters, and extracting face feature description vectors based on the network model parameters, includes: training the N facial regions with the training network to obtain network model parameters, and extracting face feature description vectors based on the network model parameters.
3. The face recognition method according to claim 2, characterized in that the dividing N facial regions from the M sample images respectively includes:
choosing N facial regions of different scales from the M sample images respectively.
4. The face recognition method according to any one of claims 1 to 3, characterized in that the training network is a convolutional network.
5. The face recognition method according to claim 1, characterized in that the performing perspective transforms on the face image from at least two angles includes:
performing perspective transforms on the face image using the transformation matrices of four angles, namely left, right, top-down, and bottom-up, respectively.
6. A face recognition device, characterized by comprising:
a determination unit, configured to determine key points of a face from an image, and to determine a rectangular face frame enclosing the face according to the key points;
a correction unit, configured to perform rotation correction on the face frame to obtain a frontal face image;
a perspective transform unit, configured to perform perspective transforms on the face image from at least two angles to obtain M sample images, the M being a positive integer greater than or equal to 2;
a training unit, configured to train the M sample images with a training network to obtain network model parameters, and to extract face feature description vectors based on the network model parameters;
a face recognition unit, configured to perform face recognition according to the face feature description vectors.
7. The face recognition device according to claim 6, characterized in that:
the training unit is configured to divide N facial regions from the M sample images respectively, the N being a positive integer greater than the M, to train the N facial regions with the training network to obtain network model parameters, and to extract face feature description vectors based on the network model parameters.
8. The face recognition device according to claim 7, characterized in that:
the training unit is configured to choose N facial regions of different scales from the M sample images respectively.
9. The face recognition device according to any one of claims 6 to 8, characterized in that the training network is a convolutional network.
10. The face recognition device according to claim 6, characterized in that:
the perspective transform unit is configured to perform perspective transforms on the face image using the transformation matrices of four angles, namely left, right, top-down, and bottom-up, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710110054.7A CN108509828A (en) | 2017-02-28 | 2017-02-28 | A kind of face identification method and face identification device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710110054.7A CN108509828A (en) | 2017-02-28 | 2017-02-28 | A kind of face identification method and face identification device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108509828A true CN108509828A (en) | 2018-09-07 |
Family
ID=63374014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710110054.7A Pending CN108509828A (en) | 2017-02-28 | 2017-02-28 | A kind of face identification method and face identification device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108509828A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109543633A (en) * | 2018-11-29 | 2019-03-29 | 上海钛米机器人科技有限公司 | A kind of face identification method, device, robot and storage medium |
CN111275005A (en) * | 2020-02-21 | 2020-06-12 | 腾讯科技(深圳)有限公司 | Drawn face image recognition method, computer-readable storage medium and related device |
CN112464897A (en) * | 2020-12-15 | 2021-03-09 | 南方电网电力科技股份有限公司 | Electric power operator screening method and device |
CN116110099A (en) * | 2023-01-19 | 2023-05-12 | 北京百度网讯科技有限公司 | Head portrait generating method and head portrait replacing method |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030086593A1 (en) * | 2001-05-31 | 2003-05-08 | Chengjun Liu | Feature based classification |
CN102496275A (en) * | 2011-11-25 | 2012-06-13 | 大连海创高科信息技术有限公司 | Method for detecting overload of coach or not |
CN104036323A (en) * | 2014-06-26 | 2014-09-10 | 叶茂 | Vehicle detection method based on convolutional neural network |
CN105654049A (en) * | 2015-12-29 | 2016-06-08 | 中国科学院深圳先进技术研究院 | Facial expression recognition method and device |
CN105809184A (en) * | 2015-10-30 | 2016-07-27 | 哈尔滨工程大学 | Vehicle real-time identification tracking and parking space occupancy determining method suitable for gas station |
CN105989357A (en) * | 2016-01-18 | 2016-10-05 | 合肥工业大学 | Human face video processing-based heart rate detection method |
CN106251294A (en) * | 2016-08-11 | 2016-12-21 | 西安理工大学 | A kind of single width is faced the virtual multi-pose of facial image and is generated method |
CN106295530A (en) * | 2016-07-29 | 2017-01-04 | 北京小米移动软件有限公司 | Face identification method and device |
CN106408015A (en) * | 2016-09-13 | 2017-02-15 | 电子科技大学成都研究院 | Road fork identification and depth estimation method based on convolutional neural network |
CN106446779A (en) * | 2016-08-29 | 2017-02-22 | 深圳市软数科技有限公司 | Method and apparatus for identifying identity |
Application Events

- 2017-02-28: Application CN201710110054.7A filed (CN); published as CN108509828A, status active, Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030086593A1 (en) * | 2001-05-31 | 2003-05-08 | Chengjun Liu | Feature based classification |
CN102496275A (en) * | 2011-11-25 | 2012-06-13 | 大连海创高科信息技术有限公司 | Method for detecting whether a coach is overloaded
CN104036323A (en) * | 2014-06-26 | 2014-09-10 | 叶茂 | Vehicle detection method based on convolutional neural network |
CN105809184A (en) * | 2015-10-30 | 2016-07-27 | 哈尔滨工程大学 | Vehicle real-time identification tracking and parking space occupancy determining method suitable for gas station |
CN105654049A (en) * | 2015-12-29 | 2016-06-08 | 中国科学院深圳先进技术研究院 | Facial expression recognition method and device |
CN105989357A (en) * | 2016-01-18 | 2016-10-05 | 合肥工业大学 | Human face video processing-based heart rate detection method |
CN106295530A (en) * | 2016-07-29 | 2017-01-04 | 北京小米移动软件有限公司 | Face identification method and device |
CN106251294A (en) * | 2016-08-11 | 2016-12-21 | 西安理工大学 | Method for generating virtual multi-pose face images from a single frontal face image
CN106446779A (en) * | 2016-08-29 | 2017-02-22 | 深圳市软数科技有限公司 | Method and apparatus for identifying identity |
CN106408015A (en) * | 2016-09-13 | 2017-02-15 | 电子科技大学成都研究院 | Road fork identification and depth estimation method based on convolutional neural network |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109543633A (en) * | 2018-11-29 | 2019-03-29 | 上海钛米机器人科技有限公司 | Face recognition method, device, robot and storage medium
CN111275005A (en) * | 2020-02-21 | 2020-06-12 | 腾讯科技(深圳)有限公司 | Drawn face image recognition method, computer-readable storage medium and related device |
CN112464897A (en) * | 2020-12-15 | 2021-03-09 | 南方电网电力科技股份有限公司 | Electric power operator screening method and device |
CN112464897B (en) * | 2020-12-15 | 2021-09-24 | 南方电网电力科技股份有限公司 | Electric power operator screening method and device |
CN116110099A (en) * | 2023-01-19 | 2023-05-12 | 北京百度网讯科技有限公司 | Avatar generation method and avatar replacement method
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110309837B (en) | Data processing method and image processing method based on convolutional neural network characteristic diagram | |
US11354797B2 (en) | Method, device, and system for testing an image | |
CN110874594A (en) | Human body surface damage detection method based on semantic segmentation network and related equipment | |
CN107609466A (en) | Face cluster method, apparatus, equipment and storage medium | |
CN108509828A (en) | A kind of face identification method and face identification device | |
CN109063678B (en) | Face image recognition method, device and storage medium | |
CN112419326B (en) | Image segmentation data processing method, device, equipment and storage medium | |
CN111160351B (en) | Fast high-resolution image segmentation method based on block recommendation network | |
CN110866469B (en) | Method, device, equipment and medium for recognizing facial features | |
CN110765882A (en) | Video tag determination method, device, server and storage medium | |
CN106651973A (en) | Image structuring method and device | |
CN110335139A (en) | Similarity-based assessment method, device, equipment and readable storage medium | |
CN110852257A (en) | Method and device for detecting key points of human face and storage medium | |
CN109145720A (en) | Face recognition method and device | |
CN110287831A (en) | Landmark-based control point acquisition method, device and electronic equipment | |
CN115830402A (en) | Fine-grained image recognition classification model training method, device and equipment | |
CN106709431A (en) | Iris recognition method and device | |
CN109753873A (en) | Image processing method and relevant apparatus | |
CN112017162B (en) | Pathological image processing method, pathological image processing device, storage medium and processor | |
CN111429414B (en) | Artificial intelligence-based focus image sample determination method and related device | |
CN110321892A (en) | Picture screening method, device and electronic equipment | |
CN108388886A (en) | Method, apparatus, terminal and computer-readable storage medium for image scene recognition | |
CN112734772A (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
CN112749576A (en) | Image recognition method and device, computing equipment and computer storage medium | |
CN106874835B (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180907 |