
CN111881740A - Face recognition method, face recognition device, electronic equipment and medium - Google Patents

Face recognition method, face recognition device, electronic equipment and medium Download PDF

Info

Publication number
CN111881740A
CN111881740A
Authority
CN
China
Prior art keywords
characteristic vector
face
occlusion
face recognition
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010567110.1A
Other languages
Chinese (zh)
Other versions
CN111881740B (en)
Inventor
肖传宝
梁佳
杜永生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Moredian Technology Co ltd
Original Assignee
Hangzhou Moredian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Moredian Technology Co ltd filed Critical Hangzhou Moredian Technology Co ltd
Priority to CN202010567110.1A priority Critical patent/CN111881740B/en
Publication of CN111881740A publication Critical patent/CN111881740A/en
Application granted granted Critical
Publication of CN111881740B publication Critical patent/CN111881740B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method in the technical field of face recognition, comprising the following steps: establishing a mapping network, where the mapping network holds a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponding to a sample face image with an occlusion region and the target feature vector corresponding to a sample face image without an occlusion region; acquiring a feature vector a, corresponding to a face image to be recognized that has an occlusion region; inputting the feature vector a into the mapping network to obtain a feature vector b; and judging whether the feature vector b matches a feature vector c in a feature vector database, and if so, outputting a recognition-success signal, where the feature vector c corresponds to a pre-stored face image without an occlusion region. The method can perform fast face recognition on a face image with an occlusion region, improving the user experience. The invention also discloses a face recognition device, an electronic device, and a computer-readable storage medium.

Description

Face recognition method, face recognition device, electronic equipment and medium
Technical Field
The present invention relates to the field of face recognition technologies, and in particular, to a face recognition method, an apparatus, an electronic device, and a medium.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. It covers a series of related techniques, also commonly called portrait recognition or facial recognition, in which a camera or video camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and recognition is then performed on the detected faces.
However, existing face recognition is usually based on the whole face. When the face carries a large occluding object, such as sunglasses or a mask, recognition either cannot proceed normally or is directly judged as failed, so the person to be recognized must remove the object and put it back on only after recognition completes, which reduces recognition efficiency and degrades the user experience.
Disclosure of Invention
To overcome the defects of the prior art, an object of the present invention is to provide a face recognition method that can perform fast face recognition on a face image with an occlusion region, so as to improve the user experience.
One of the purposes of the invention is realized by adopting the following technical scheme:
A face recognition method comprises the following steps:
establishing a mapping network, wherein the mapping network holds a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponding to a sample face image with an occlusion region and the target feature vector corresponding to a sample face image without an occlusion region;
acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized that has an occlusion region;
inputting the feature vector a into the mapping network to obtain a feature vector b;
and judging whether the feature vector b matches a feature vector c in a feature vector database, and if so, outputting a recognition-success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
Further, establishing the mapping network comprises the following steps:
acquiring a source sample and inputting the source sample into a face recognition model to obtain the source feature vector, wherein the source sample is a sample face image without an occlusion region;
inputting the source sample into an occlusion model to obtain a target sample, and inputting the target sample into the face recognition model to obtain the target feature vector, wherein the target sample is a sample face image with an occlusion region;
and training the mapping network, wherein the target feature vector serves as the input of the mapping network and the source feature vector serves as its output.
Further, obtaining the feature vector a comprises the following steps:
receiving an image to be detected;
inputting the image to be detected into a face detection model to obtain a face region;
and judging whether the face region has an occlusion region, and if so, recording the face region as the face image to be recognized and inputting the face image to be recognized into the face recognition model to obtain the feature vector a.
Further, the method also comprises the following steps:
when the face region has no occlusion region, recording the face region as a complete face image;
inputting the complete face image into the face recognition model to obtain a feature vector d;
and judging whether the feature vector d matches the feature vector c in the feature vector database, and if so, outputting a recognition-success signal.
Further, the mapping network has more than one mapping model, each mapping model associated with one occlusion region type; inputting the feature vector a into the mapping network to obtain the feature vector b comprises the following steps:
querying the occlusion region type k of the feature vector a;
obtaining the associated mapping model D according to the occlusion region type k;
and inputting the feature vector a into the mapping model D to obtain the feature vector b.
Further, querying the occlusion region type k of the feature vector a comprises the following steps:
acquiring the face image to be recognized that has an occlusion region;
and inputting the face image to be recognized into a classification model to obtain the occlusion region type k.
Further, the occlusion region type k includes any one of, or any combination of, left-eye occlusion, right-eye occlusion, nose occlusion, and mouth occlusion.
The second object of the present invention is to provide a face recognition device that can perform fast face recognition on a face image with an occlusion region, so as to improve the user experience.
The second purpose of the invention is realized by adopting the following technical scheme:
A face recognition apparatus comprises: an establishing module, used to establish a mapping network holding a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponding to a sample face image with an occlusion region and the target feature vector corresponding to a sample face image without an occlusion region; an acquisition module, used to acquire a feature vector a corresponding to a face image to be recognized that has an occlusion region; a processing module, used to input the feature vector a into the mapping network to obtain a feature vector b; and a matching module, used to judge whether the feature vector b matches a feature vector c in a feature vector database and, if so, to output a recognition-success signal, the feature vector c corresponding to a pre-stored face image without an occlusion region.
A third object of the present invention is to provide an electronic device comprising a processor, a storage medium, and a computer program stored in the storage medium; when executed by the processor, the computer program implements the above face recognition method.
A fourth object of the present invention is to provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the above face recognition method.
Compared with the prior art, the invention has the following beneficial effects. Because the mapping network holds the mapping relation between the source feature vector and the target feature vector, the feature vector a can be converted through the mapping network into a feature vector b, which can be regarded as corresponding to a face image to be recognized without an occlusion region, so the matching step can then proceed; a face image with an occlusion region can therefore be recognized quickly, improving the user experience. Moreover, the resulting feature vector b is still matched against the feature vector c in the feature vector database, so developers can build on existing face recognition technology, reducing development difficulty.
Drawings
FIG. 1 is a flow chart of a face recognition method according to an embodiment;
FIG. 2 is a flowchart of step S10 according to the second embodiment;
FIG. 3 is a flowchart of step S30 according to the second embodiment;
FIG. 4 is a flowchart of steps S20 and S60 according to the third embodiment;
FIG. 5 is a block diagram of a face recognition apparatus according to a fourth embodiment;
FIG. 6 is a block diagram of an electronic device according to the fifth embodiment.
In the figures: 1, establishing module; 2, acquisition module; 3, processing module; 4, matching module; 5, electronic device; 51, processor; 52, memory; 53, input device; 54, output device.
Detailed Description
The present invention will now be described in more detail with reference to the accompanying drawings; the description is given by way of illustration, not limitation. The various embodiments may be combined with each other to form further embodiments not described below.
Embodiment One
The first embodiment provides a face recognition method that aims to solve the problem that existing face recognition technology struggles to recognize face images with occlusion regions. Specifically, referring to fig. 1, the face recognition method may include steps S10 to S50.
Step S10: establish a mapping network. The mapping network holds a mapping relation obtained through training between a source feature vector and a target feature vector, where the source feature vector corresponds to a sample face image with an occlusion region and the target feature vector corresponds to a sample face image without an occlusion region. That is, both during training and during inference, the input and output of the mapping network are feature vectors.
Step S20: acquire the feature vector a. The feature vector a corresponds to a face image to be recognized that has an occlusion region. Note that the rule defining the occlusion region is not limited here and may be adjusted according to the actual situation.
Step S30: input the feature vector a into the mapping network to obtain a feature vector b. Note that the feature vector b is obtained by mapping; it therefore does not truly correspond to an unoccluded image to be recognized, and can only be regarded as corresponding to the image to be recognized without its occlusion region.
Step S40: judge whether the feature vector b matches a feature vector c in the feature vector database; if yes, execute step S50; if not, finish this face recognition and proceed to the next one. Note that the next recognition may start directly from step S20. The feature vector database stores feature vectors c, each corresponding to a pre-stored face image without occlusion.
Step S50: output a recognition-success signal; the person to be recognized becomes a recognized person and is allowed to perform the corresponding operation. Denote the feature vector c that matches the feature vector b as feature vector c0; the recognition-success signal may carry the feature vector c0 and/or the pre-stored face image corresponding to it. For example, when the face recognition method is applied to an access control system, receipt of the recognition-success signal can trigger the door to open; when it is applied to a police system, receipt of the signal can trigger output of the pre-stored face image corresponding to the feature vector c0.
It is worth mentioning that the steps of the method are performed on an execution device. Specifically, the execution device may be a server, a client, a processor, or the like, and is not limited to these types.
In conclusion, because the mapping network holds the mapping relation between the source feature vector and the target feature vector, the feature vector a can be converted into a feature vector b that can be regarded as corresponding to a face image to be recognized without an occlusion region, after which the matching step can proceed; a face image with an occlusion region can therefore be recognized quickly, improving the user experience. Moreover, the resulting feature vector b is still matched against the feature vector c in the feature vector database, so developers can build on existing face recognition technology, reducing development difficulty.
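The flow of steps S10 to S50 can be sketched in code. This is a minimal illustration, not the patented implementation: `mapping_network` stands in for the trained network of step S10, the feature database is a plain dict, and the cosine-similarity threshold of 0.6 is an assumed value.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recognize(feature_a, mapping_network, feature_db, threshold=0.6):
    """Map an occluded-face feature vector and match it against the database.

    feature_a       -- feature vector of the occluded face image (step S20)
    mapping_network -- callable mapping occluded -> unoccluded features (S30)
    feature_db      -- dict of person_id -> pre-stored feature vector c (S40)
    """
    feature_b = mapping_network(feature_a)              # step S30
    best_id, best_sim = None, -1.0
    for person_id, feature_c in feature_db.items():
        sim = cosine_similarity(feature_b, feature_c)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    if best_sim >= threshold:                           # step S40 match
        return {"success": True, "person_id": best_id}  # step S50 signal
    return {"success": False, "person_id": None}
```

An access control system would open the door when `recognize(...)["success"]` is true; a police system would instead look up the pre-stored image for the returned identity.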
Embodiment Two
This embodiment provides a face recognition method that builds on the first embodiment, as shown in fig. 1 and fig. 2. Specifically, step S10 may include steps S101 to S103.
Step S101: obtain a source sample and input it into a face recognition model to obtain the source feature vector, where the source sample is an unoccluded sample face image. Note that the face recognition model may be, but is not limited to, any deep learning model capable of producing feature vectors for the source samples; MobileNet-V2 is preferred.
Step S102: input the source sample into an occlusion model to obtain a target sample, and input the target sample into the face recognition model to obtain the target feature vector. The target sample is a sample face image with an occlusion region; that is, the target sample and the source sample correspond to the same sample face image and differ only in the occlusion region. Note that the occlusion model may be, but is not limited to, any deep learning generative model capable of occluding an unoccluded sample face as required; CycleGAN is preferred.
Step S103: train the mapping network, with the target feature vector as its input and the source feature vector as its output, so that the mapping network acquires the mapping relation between the source and target feature vectors. The mapping network may be, but is not limited to, any deep learning model capable of learning this mapping; MobileNet-V2 is preferred.
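Steps S101 to S103 can be illustrated with a toy training sketch. The real scheme uses MobileNet-V2 feature extractors and a CycleGAN-style occlusion model; here both are passed in as placeholder callables, and the mapping network is reduced to a linear least-squares map purely for illustration.

```python
import numpy as np

def build_training_pairs(source_images, occlusion_model, face_recognition_model):
    """S101-S102: produce (input, output) feature-vector pairs.

    Each unoccluded source sample is occluded synthetically; both versions
    pass through the same face recognition model. The occluded (target)
    features become mapping-network inputs, the unoccluded (source)
    features its outputs.
    """
    inputs, outputs = [], []
    for img in source_images:
        source_vec = face_recognition_model(img)              # unoccluded
        target_vec = face_recognition_model(occlusion_model(img))  # occluded
        inputs.append(target_vec)
        outputs.append(source_vec)
    return np.stack(inputs), np.stack(outputs)

def train_linear_mapping(target_vecs, source_vecs):
    """S103 reduced to a linear map: solve target_vecs @ W ~= source_vecs."""
    W, *_ = np.linalg.lstsq(target_vecs, source_vecs, rcond=None)
    return lambda a: a @ W
```

The returned callable plays the role of the mapping network: it takes a feature vector a and produces a feature vector b.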
Through this scheme the mapping network is established, so that the feature vector a, corresponding to an image to be recognized with an occlusion region, can be converted by the mapping network into the feature vector b, corresponding to an image to be recognized without an occlusion region.
Further, when there is only one occlusion region type, the mapping network may contain a single mapping model D; this case is relatively simple and is not described further. When there are multiple occlusion region types, the mapping network may contain either one mapping model or several.
When there are n occlusion region types with n > 1 and the mapping network contains only one mapping model: for each source sample in step S101 there are n target samples in step S102, one per occlusion region type, and the occlusion region type then does not need to be determined in step S30. This reduces the number of steps, but the mapping accuracy is lower.
When there are n occlusion region types with n > 1 and the mapping network contains n mapping models: for each source sample in step S101 there are again n target samples in step S102, one per occlusion region type, but in step S30 the occlusion region type of the feature vector a must be determined first and the corresponding mapping model selected. Although this adds a step, each occlusion region type has its own mapping model, which improves the mapping accuracy.
It is worth noting that the occlusion region type may include any one of, or any combination of, left-eye occlusion, right-eye occlusion, nose occlusion, and mouth occlusion; the types are not limited to these and may be added or removed according to the actual situation. Left-eye occlusion may mean that more than half of the left eye is occluded, and similarly for the right eye, nose, and mouth. For example, when the person to be recognized wears a mask, the mouth and more than half of the nose are occluded, which can be treated as the combination of mouth occlusion and nose occlusion; when the person wears sunglasses, both eyes are occluded, which can be treated as the combination of left-eye and right-eye occlusion.
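One convenient (hypothetical, not prescribed by the text) way to encode an occlusion region type k, including the combinations described above, is a frozen set of part labels:

```python
# The four base occlusion types named in the text; any non-empty
# combination of them is itself a valid occlusion region type k.
BASE_OCCLUSIONS = frozenset({"left_eye", "right_eye", "nose", "mouth"})

def occlusion_type(occluded_parts):
    """Normalise a collection of occluded parts into a hashable type key k."""
    parts = frozenset(occluded_parts)
    unknown = parts - BASE_OCCLUSIONS
    if unknown:
        raise ValueError(f"unknown occlusion part(s): {sorted(unknown)}")
    return parts

# The two examples from the text:
mask_type = occlusion_type({"mouth", "nose"})                # wearing a mask
sunglasses_type = occlusion_type({"left_eye", "right_eye"})  # wearing sunglasses
```

Because the keys are hashable, they can index a dict of per-type mapping models directly.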
As an optional technical solution, when there are n occlusion region types with n > 1 and the mapping network has n mapping models, step S30 may include steps S301 to S303, as shown in fig. 3.
Step S301: query the occlusion region type of the feature vector a, denoted occlusion region type k. A classification model may be used here. Specifically, acquire the face image to be recognized that has an occlusion region, then input it into the classification model to obtain the occlusion region type k.
Note that the target samples from step S102 may be reused here: the number of target samples corresponds to the number of occlusion region types, each target sample serving as an input of the classification model and its occlusion region type as the output. The classification model may be, but is not limited to, any deep learning model capable of determining the occlusion region type.
Step S302: obtain the associated mapping model according to the occlusion region type k, denoted mapping model D. Since the target samples used to train a given mapping model share the same occlusion region type, each occlusion region type is associated with one mapping model.
Step S303: input the feature vector a into the mapping model D to obtain the feature vector b.
Through this scheme, applying a single mapping model D instead of the whole mapping network improves training efficiency on the one hand and computational efficiency and accuracy on the other.
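Steps S301 to S303 can be sketched as a simple dispatch; the classification model and the per-type mapping models are passed in as placeholder callables, since the text leaves their concrete implementations open:

```python
def map_occluded_features(feature_a, face_image, classifier, mapping_models):
    """S301-S303: classify the occlusion, pick model D, map a -> b.

    classifier     -- callable: face image -> occlusion region type k (S301)
    mapping_models -- dict: occlusion type -> mapping callable, one per type
    """
    type_k = classifier(face_image)      # S301: query occlusion type k
    model_d = mapping_models[type_k]     # S302: associated mapping model D
    return model_d(feature_a)            # S303: feature vector b
```

A `KeyError` here would indicate an occlusion type for which no mapping model was trained.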
Embodiment Three
This embodiment provides a face recognition method that builds on the first or second embodiment. Referring to fig. 1 and fig. 4, step S20 may further include steps S201 to S204.
Step S201: receive an image to be detected. The image to be detected may be collected by an attached camera or uploaded by other equipment; its specific source is not limited.
Step S202: input the image to be detected into a face detection model to obtain a face region. The face detection model is prior art and is not limited here. Note that the face region is the face occupying the largest proportion of the image to be detected.
Step S203: judge whether the face region has an occlusion region; if yes, execute step S204. A suitable model may be selected or a suitable algorithm adopted, without limitation. Note that the judgment in this step is only a rough one, for example based on the size of the occluded area; the specific position of the occlusion region need not be determined.
Step S204: record the face region as the face image to be recognized, input it into the face recognition model to obtain a feature vector, and denote that vector feature vector a.
Through this scheme, the input image to be detected is judged preliminarily, and only a face region with an occlusion region proceeds down the corresponding branch, improving the accuracy of the mapping network's input.
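The rough judgment of step S203 (occluded or not, by area, without locating the occlusion) can be sketched as follows. Both `skin_mask_fn` and the 0.15 area-ratio threshold are illustrative assumptions, standing in for whatever model or algorithm is actually chosen:

```python
import numpy as np

def has_occlusion(face_region, skin_mask_fn, area_ratio_threshold=0.15):
    """Rough occlusion check (S203): flag the face if the non-face-like
    area exceeds a fraction of the region, without locating the occlusion.

    face_region  -- (H, W, ...) image array of the detected face
    skin_mask_fn -- callable returning a boolean mask of face-like pixels
    """
    mask = skin_mask_fn(face_region)
    occluded_ratio = 1.0 - float(np.count_nonzero(mask)) / mask.size
    return occluded_ratio > area_ratio_threshold
```

When this returns true, the region is routed to the mapping network (S204 onward); otherwise it takes the direct-matching branch of step S60.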
As an optional technical solution, referring to fig. 4, the method may further include step S60.
Step S60 is executed after step S203 determines that the face region has no occlusion region, and may include steps S601 to S603.
Step S601: record the face region without an occlusion region as a complete face image.
Step S602: input the complete face image into the face recognition model to obtain a corresponding feature vector d.
Step S603: judge whether the feature vector d matches the feature vector c in the feature vector database; if yes, execute step S604 and output a recognition-success signal. Note that steps S40 and S603 may be merged into one step, and likewise steps S50 and S604, to reduce the overall number of steps and memory usage.
Through this scheme, when the face region has an occlusion region it passes through the mapping network before being matched against the feature vector c in the feature vector database; when it has no occlusion region it is matched directly. Face recognition of the image to be detected is thus achieved either way, and because the mapping network's memory footprint is small, the accuracy of recognition under occlusion is improved without affecting the overall latency.
Further, the face recognition method may also include the following steps: when the matching in step S40 fails, mark the feature vector c with the highest matching degree and denote it feature vector c1; then judge whether a subsequent matching via step S60 succeeds, and if so, mark the matched feature vector c and denote it feature vector c2; then judge whether feature vector c1 is identical to feature vector c2, and if so, retrieve the pre-stored face image corresponding to that feature vector c, use it as a new source sample, and retrain and update the mapping network according to step S10.
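The self-updating rule in the preceding paragraph can be sketched as follows. Identity labels stand in for the marked vectors c1 and c2, and `retrain_fn` stands in for rerunning step S10; both names are illustrative:

```python
def maybe_update_mapping(c1_id, c2_id, prestored_images, retrain_fn):
    """Online-update hook: if the occluded-branch match failed but the
    best candidate (c1) equals a later successful unoccluded match (c2),
    reuse that person's pre-stored image as a fresh source sample.

    c1_id, c2_id     -- identity labels of the two marked vectors
    prestored_images -- dict of identity -> pre-stored face image
    retrain_fn       -- callable retraining the mapping network (step S10)
    """
    if c1_id is not None and c1_id == c2_id:
        retrain_fn([prestored_images[c1_id]])   # new source sample
        return True
    return False
```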
Embodiment Four
The fourth embodiment provides a face recognition device, which is the virtual-device counterpart of the above embodiments and aims to solve the problem that existing face recognition technology struggles to recognize face images with occlusion regions. The face recognition apparatus may include: an establishing module 1, an acquisition module 2, a processing module 3, and a matching module 4.
The establishing module 1 is used to establish a mapping network holding a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponding to a sample face image with an occlusion region and the target feature vector corresponding to a sample face image without an occlusion region. The acquisition module 2 is used to acquire a feature vector a corresponding to a face image to be recognized that has an occlusion region. The processing module 3 is used to input the feature vector a into the mapping network to obtain a feature vector b. The matching module 4 is used to judge whether the feature vector b matches a feature vector c in the feature vector database and, if so, to output a recognition-success signal, the feature vector c corresponding to a pre-stored face image without an occlusion region.
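The four modules can be wired together in an illustrative class. Every component here is a placeholder callable and the 0.6 similarity threshold is assumed, since the patent does not prescribe concrete implementations:

```python
import numpy as np

class FaceRecognitionDevice:
    """Illustrative wiring of modules 1-4 from the apparatus embodiment."""

    def __init__(self, establish_fn, acquire_fn, feature_db, threshold=0.6):
        self.mapping_network = establish_fn()   # establishing module (1)
        self.acquire = acquire_fn               # acquisition module (2)
        self.feature_db = feature_db            # pre-stored vectors c
        self.threshold = threshold

    def process(self, feature_a):               # processing module (3)
        return self.mapping_network(feature_a)

    def match(self, feature_b):                 # matching module (4)
        for person_id, c in self.feature_db.items():
            sim = float(np.dot(feature_b, c) /
                        (np.linalg.norm(feature_b) * np.linalg.norm(c)))
            if sim >= self.threshold:
                return person_id                # recognition-success signal
        return None

    def recognize(self, image):
        feature_a = self.acquire(image)
        return self.match(self.process(feature_a))
```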
Embodiment Five
The fifth embodiment provides an electronic device 5, which may be a desktop computer, a notebook computer, a server (a physical server or a cloud server), or even a mobile phone or a tablet computer.
FIG. 6 is a schematic structural diagram of the electronic device according to the fifth embodiment. As shown in fig. 1 and fig. 6, the electronic device 5 includes a processor 51, a memory 52, an input device 53, and an output device 54. The number of processors 51 in the device may be one or more; one processor 51 is taken as the example in fig. 6. The processor 51, memory 52, input device 53, and output device 54 in the electronic device 5 may be connected by a bus or by other means; a bus connection is used as the example in fig. 6.
As a computer-readable storage medium, the memory 52 can store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the face recognition method in the embodiments of the present invention: the establishing module 1, acquisition module 2, processing module 3, and matching module 4 of the face recognition apparatus. By running the software programs, instructions, and modules stored in the memory 52, the processor 51 executes the various functional applications and data processing of the electronic device 5, that is, implements the face recognition method of any embodiment, or combination of embodiments, among the first to third embodiments.
The memory 52 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function, while the data storage area may store data created through use of the terminal, and so on. Further, the memory 52 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 52 may further include memory located remotely from the processor 51 and connected to the electronic device 5 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It is worth mentioning that the input device 53 may be used to receive the collected data. The output device 54 may be a document, a display screen, or the like. Specifically, when the output device 54 is a document, the corresponding information is recorded in the document in a specific format, achieving both data storage and data integration; when the output device 54 is a display device such as a display screen, the corresponding information is shown on it directly so the user can view it in real time.
Embodiment Six
An embodiment of the present invention further provides a computer-readable storage medium containing computer-executable instructions which, when executed by a computer processor, perform the above face recognition method, the method comprising:
establishing a mapping network, wherein the mapping network has a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region;
acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region;
inputting the feature vector a into the mapping network to obtain a feature vector b;
and judging whether the feature vector b matches a feature vector c in the feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
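The recognition flow of this embodiment — map the occluded-face feature vector a to an estimated unoccluded vector b, then match b against the pre-stored feature vector base — can be sketched as follows. The patent specifies neither the similarity measure, the decision threshold, nor the mapping network's form, so the cosine similarity, the `threshold` value, and all function and variable names below are illustrative assumptions:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recognize(feature_a, mapping_network, feature_base, threshold=0.5):
    """Map an occluded-face feature vector a to an unoccluded estimate b,
    then match b against the pre-stored feature vector base.

    feature_a: 1-D vector extracted from the face image with an occlusion region
    mapping_network: callable mapping feature vector a -> feature vector b
    feature_base: dict of identity -> pre-stored feature vector c
    Returns (identity, score) on success, otherwise (None, best_score).
    """
    feature_b = mapping_network(feature_a)
    best_id, best_score = None, -1.0
    for identity, feature_c in feature_base.items():
        score = cosine_similarity(feature_b, feature_c)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= threshold:
        return best_id, best_score  # the "recognition success signal"
    return None, best_score
```

With an identity mapping network and a base containing the query vector itself, `recognize` returns that identity with a similarity of 1.0; in practice the mapping network would be the trained model of claim 2.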
Of course, the computer-executable instructions of the computer-readable storage medium provided by the embodiments of the present invention are not limited to the method operations described above.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present invention may be implemented by software together with necessary general-purpose hardware, or by hardware alone, though the former is the preferred implementation in most cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disc of a computer, and includes several instructions that enable an electronic device (which may be a mobile phone, a personal computer, a server, or a network device) to execute the face recognition method according to any embodiment, or combination of embodiments, of the first to third embodiments of the present invention.
It should be noted that, in the foregoing face recognition embodiments, the included units and modules are divided merely according to functional logic; the division is not limited thereto as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
The above embodiments are only preferred embodiments of the present invention and do not thereby limit its protection scope; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the protection scope of the present invention.

Claims (10)

1. A face recognition method is characterized by comprising the following steps:
establishing a mapping network, wherein the mapping network has a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region;
acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region;
inputting the feature vector a into the mapping network to obtain a feature vector b;
and judging whether the feature vector b matches a feature vector c in a feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
2. The face recognition method of claim 1, wherein establishing a mapping network comprises the steps of:
acquiring a source sample and inputting the source sample into a face recognition model to obtain the source feature vector, wherein the source sample is a sample face image without an occlusion region;
inputting the source sample into an occlusion model to obtain a target sample, and inputting the target sample into the face recognition model to obtain the target feature vector, wherein the target sample is a sample face image with an occlusion region;
and training a mapping network, wherein the target feature vector is used as the input of the mapping network, and the source feature vector is used as the output of the mapping network.
3. The face recognition method of claim 1, wherein obtaining the feature vector a comprises the following steps:
receiving an image to be detected;
inputting the image to be detected into a face detection model to obtain a face region;
and judging whether the face region has an occlusion region, and if so, recording the face region as the face image to be recognized and inputting the face image to be recognized into a face recognition model to obtain the feature vector a.
4. The face recognition method of claim 3, further comprising the steps of:
when the face region has no occlusion region, recording the face region as a complete face image;
inputting the complete face image into the face recognition model to obtain a feature vector d;
and judging whether the feature vector d matches the feature vector c in the feature vector base, and if so, outputting a recognition success signal.
5. The face recognition method according to any one of claims 1 to 4, wherein the mapping network has more than one mapping model, each mapping model being associated with an occlusion region type; and inputting the feature vector a into the mapping network to obtain the feature vector b comprises the following steps:
querying the occlusion region type k of the feature vector a;
obtaining a related mapping model D according to the occlusion region type k;
and inputting the feature vector a into the mapping model D to obtain a feature vector b.
6. The face recognition method according to claim 5, wherein querying the occlusion region type k of the feature vector a comprises the following steps:
acquiring the face image to be recognized with an occlusion region;
and inputting the face image to be recognized into a classification model to obtain the occlusion region type k.
7. The face recognition method according to claim 5, wherein the occlusion region type k comprises any one or more of left eye occlusion, right eye occlusion, nose occlusion, and mouth occlusion.
8. A face recognition apparatus, comprising:
a mapping network, wherein the mapping network has a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region;
an acquisition module, configured to acquire a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region;
a processing module, configured to input the feature vector a into the mapping network to obtain a feature vector b;
and a matching module, configured to judge whether the feature vector b matches a feature vector c in a feature vector base and, if so, output a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
9. An electronic device comprising a processor, a storage medium, and a computer program, the computer program being stored in the storage medium, wherein the computer program, when executed by the processor, implements the face recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the face recognition method according to any one of claims 1 to 7.
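The training procedure of claim 2 (occluded target vector as input, unoccluded source vector as output) and the per-occlusion-type models of claim 5 can be sketched as below. The patent does not disclose the mapping network's architecture, so a ridge-regularized linear least-squares fit stands in for it here; all function names and the `reg` parameter are illustrative assumptions:

```python
import numpy as np

def train_mapping_model(target_vecs, source_vecs, reg=1e-3):
    """Fit a linear mapping W so that t @ W approximates s for each pair of
    (occluded target, unoccluded source) feature vectors.

    target_vecs: (n, d) feature vectors of occluded sample faces (network input)
    source_vecs: (n, d) feature vectors of the same faces without occlusion (output)
    Returns a callable mapping model.
    """
    T = np.asarray(target_vecs)  # (n, d)
    S = np.asarray(source_vecs)  # (n, d)
    d = T.shape[1]
    # Closed-form ridge solution: W = (T'T + reg*I)^-1 T'S, applied as x @ W
    W = np.linalg.solve(T.T @ T + reg * np.eye(d), T.T @ S)
    return lambda x: np.asarray(x) @ W

def build_mapping_network(samples_by_type, reg=1e-3):
    """One mapping model per occlusion region type k, as in claim 5.

    samples_by_type: dict of occlusion type k -> (target_vecs, source_vecs)
    """
    return {k: train_mapping_model(t, s, reg)
            for k, (t, s) in samples_by_type.items()}
```

In practice the mapping model would typically be a small neural network trained with a regression loss, but the linear fit illustrates the same input/output contract: feed it the feature vector of an occluded face and it returns an estimate of the unoccluded feature vector.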
CN202010567110.1A 2020-06-19 2020-06-19 Face recognition method, device, electronic equipment and medium Active CN111881740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010567110.1A CN111881740B (en) 2020-06-19 2020-06-19 Face recognition method, device, electronic equipment and medium


Publications (2)

Publication Number Publication Date
CN111881740A true CN111881740A (en) 2020-11-03
CN111881740B CN111881740B (en) 2024-03-22

Family

ID=73156522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010567110.1A Active CN111881740B (en) 2020-06-19 2020-06-19 Face recognition method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111881740B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160132720A1 (en) * 2014-11-07 2016-05-12 Noblis, Inc. Vector-based face recognition algorithm and image search system
CN105095856A (en) * 2015-06-26 2015-11-25 上海交通大学 Method for recognizing human face with shielding based on mask layer
CN105139000A (en) * 2015-09-16 2015-12-09 浙江宇视科技有限公司 Face recognition method and device enabling glasses trace removal
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
WO2019033572A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Method for detecting whether face is blocked, device and storage medium
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN109960975A (en) * 2017-12-23 2019-07-02 四川大学 A kind of face generation and its face identification method based on human eye
CN107909065A (en) * 2017-12-29 2018-04-13 百度在线网络技术(北京)有限公司 The method and device blocked for detecting face
CN110363047A (en) * 2018-03-26 2019-10-22 普天信息技术有限公司 Method, apparatus, electronic equipment and the storage medium of recognition of face
CN110751009A (en) * 2018-12-20 2020-02-04 北京嘀嘀无限科技发展有限公司 Face recognition method, target recognition device and electronic equipment
CN109886167A (en) * 2019-02-01 2019-06-14 中国科学院信息工程研究所 One kind blocking face identification method and device
CN110363091A (en) * 2019-06-18 2019-10-22 广州杰赛科技股份有限公司 Face identification method, device, equipment and storage medium in the case of side face

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613435A (en) * 2020-12-28 2021-04-06 杭州魔点科技有限公司 Face image generation method, device, equipment and medium
WO2023241817A1 (en) * 2022-06-15 2023-12-21 Veridas Digital Authentication Solutions, S.L. Authenticating a person
CN116128514A (en) * 2022-11-28 2023-05-16 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention
CN116128514B (en) * 2022-11-28 2023-10-13 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention

Also Published As

Publication number Publication date
CN111881740B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN109697416B (en) Video data processing method and related device
US10832069B2 (en) Living body detection method, electronic device and computer readable medium
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN105518712B (en) Keyword notification method and device based on character recognition
CN111914812B (en) Image processing model training method, device, equipment and storage medium
CN111523413B (en) Method and device for generating face image
CN112381104B (en) Image recognition method, device, computer equipment and storage medium
CN109670444B (en) Attitude detection model generation method, attitude detection device, attitude detection equipment and attitude detection medium
KR20200098875A (en) System and method for providing 3D face recognition
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
CN111881740B (en) Face recognition method, device, electronic equipment and medium
CN115699082A (en) Defect detection method and device, storage medium and electronic equipment
CN112101123B (en) Attention detection method and device
CN111428570A (en) Detection method and device for non-living human face, computer equipment and storage medium
CN114723646A (en) Image data generation method with label, device, storage medium and electronic equipment
CN111783674A (en) Face recognition method and system based on AR glasses
EP4459575A1 (en) Liveness detection method, device and apparatus, and storage medium
CN111241873A (en) Image reproduction detection method, training method of model thereof, payment method and payment device
KR102440198B1 (en) VIDEO SEARCH METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
CN112001285A (en) Method, device, terminal and medium for processing beautifying image
CN114565955B (en) Face attribute identification model training, community personnel monitoring method, device and equipment
CN111428740A (en) Detection method and device for network-shot photo, computer equipment and storage medium
CN113052025B (en) Training method of image fusion model, image fusion method and electronic equipment
CN112464827B (en) Mask wearing recognition method, device, equipment and storage medium
CN110619602B (en) Image generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant