CN114266946A - Feature recognition method and apparatus under occlusion conditions, computer device and medium - Google Patents
Feature recognition method and apparatus under occlusion conditions, computer device and medium
- Publication number
- CN114266946A (application number CN202111676900.4A)
- Authority
- CN
- China
- Prior art keywords
- occlusion
- feature
- image
- features
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a feature recognition method and apparatus under occlusion conditions, together with related equipment, applied in the field of biometric feature recognition. The method comprises: performing image preprocessing and alignment on an occluded image to obtain an aligned image; inputting the aligned image into a trained occlusion network, which performs common feature extraction and occlusion-region feature extraction on the aligned image to obtain common features and occlusion-region features; performing feature fusion on the common features and the occlusion-region features to obtain fusion features; and inputting the fusion features into a recognition network to obtain the recognition result for the occluded target. Because recognition operates on fusion features built from both the common features and the occlusion-region features, the accuracy of recognition under occlusion is improved.
Description
Technical Field
The present invention relates to the field of biometric feature recognition, and in particular to a feature recognition method and apparatus under occlusion conditions, a computer device, and a storage medium.
Background
Occlusion poses a major challenge to robust biometric identification in practical applications. In face recognition, for example, the face image fed to the system is easily occluded by objects such as a mask, sunglasses, a hat, or hands, so that part of the face region is hidden, which in turn degrades the stability and applicability of the system. Under the COVID-19 pandemic in particular, wearing a mask has become a very common scenario. How to improve biometric recognition under occlusion is therefore a problem to be solved.
For biometric recognition systems in occluded environments, traditional techniques include subspace regression, local feature analysis, and related algorithms, but their recognition accuracy is limited. Deep learning, thanks to its strong learning capacity and automatic feature selection, has become the better route to improving occluded biometric recognition. Current deep-learning-based methods fall into two categories:
1) Recognition with separate occlusion and non-occlusion models. An occlusion-judgment module first screens out occluded images. Normal images are passed to an independent conventional model for inference and recognition. For occluded images, the non-occluded region is located and cropped out, then fed to a separate occlusion model for recognition.
2) Recognition via image restoration. The information lost to occlusion is restored, either as whole images or as features, to enhance content continuity.
Traditional methods are simple, but the accuracy gains they deliver are limited. Among the deep-learning methods: 1) using multiple models for recognition not only increases the computation cost but also makes performance dependent on the occlusion-judgment module; 2) restoration-based methods inject pseudo information, which makes the model unstable and keeps recognition accuracy low. In summary, existing occluded biometric recognition techniques suffer from high computation cost and low accuracy.
Disclosure of Invention
The invention provides a feature recognition method and apparatus under occlusion conditions, a computer device, and a storage medium, aiming to improve the recognition accuracy of occluded targets.
A feature recognition method under occlusion conditions includes:
performing image preprocessing and alignment on an occluded image to obtain an aligned image;
inputting the aligned image into a trained occlusion network, and using the trained occlusion network to perform common feature extraction and occlusion-region feature extraction on the aligned image to obtain common features and occlusion-region features;
performing feature fusion on the common features and the occlusion-region features to obtain fusion features;
and inputting the fusion features into a recognition network to obtain the recognition result for the occluded target.
Optionally, the trained occlusion network includes a common feature extraction layer, an occlusion-region supervised segmentation layer, and a recognition layer;
the common feature extraction layer comprises a feature compression module and a feature expansion module, where the feature compression module adopts N ResNet-based residual convolution blocks and the feature expansion module comprises N corresponding deconvolution (transposed convolution) blocks;
the occlusion-region supervised segmentation layer consists of at least three consecutive convolution layers and a Sigmoid layer;
the recognition layer consists of at least three consecutive convolution layers, a fully connected layer, and a Softmax layer.
Optionally, the performing feature fusion on the common features and the occlusion-region features to obtain fusion features includes:
performing spatial feature selection on the output feature map of the common features through an exponential multiplication operation to obtain enhanced features;
and performing feature fusion on the enhanced features and the occlusion-region features to obtain the fusion features.
Optionally, the performing image preprocessing and alignment on the occluded image to obtain an aligned image includes:
performing target detection on the occluded image and determining the range to be detected;
cropping the image according to the range to be detected to obtain a cropped image;
and aligning the target object in the cropped image to obtain the aligned image.
Optionally, before the inputting the aligned image into the trained occlusion network, the method further comprises:
acquiring an initial occlusion network;
acquiring initial training data, preprocessing the initial training data, and applying data enhancement to the preprocessed data to obtain target training data;
inputting the target training data into the initial occlusion network for training and recognition, and determining a training recognition result based on positive and negative samples in the target training data;
adjusting parameters of the initial occlusion network based on the training recognition result, and returning to the step of inputting the target training data into the initial occlusion network for training and recognition and determining the training recognition result based on the positive and negative samples, repeating until the number of training iterations or the recognition accuracy reaches a preset condition;
and taking the resulting occlusion network as the trained occlusion network.
A feature recognition apparatus under occlusion conditions comprises:
an image processing module for performing image preprocessing and alignment on an occluded image to obtain an aligned image;
a feature extraction module for inputting the aligned image into a trained occlusion network and using the trained occlusion network to perform common feature extraction and occlusion-region feature extraction on the aligned image to obtain common features and occlusion-region features;
a feature fusion module for performing feature fusion on the common features and the occlusion-region features to obtain fusion features;
and an occlusion recognition module for inputting the fusion features into a recognition network to obtain the recognition result for the occluded target.
Optionally, the feature fusion module includes:
a feature enhancement unit for performing spatial feature selection on the output feature map of the common features through an exponential multiplication operation to obtain enhanced features;
and a feature fusion unit for fusing the enhanced features with the occlusion-region features to obtain the fusion features.
Optionally, the image processing module comprises:
a target detection unit for performing target detection on the occluded image and determining the range to be detected;
an image cropping unit for cropping the image according to the range to be detected to obtain a cropped image;
and an alignment unit for aligning the target object in the cropped image to obtain the aligned image.
Optionally, the feature recognition apparatus under occlusion conditions further includes:
an initial network acquisition module for acquiring an initial occlusion network;
a data enhancement module for acquiring initial training data, preprocessing it, and applying data enhancement to the preprocessed data to obtain target training data;
a training recognition module for inputting the target training data into the initial occlusion network for training and recognition, and determining a training recognition result based on positive and negative samples in the target training data;
an iteration module for adjusting parameters of the initial occlusion network based on the training recognition result and repeating the training-and-recognition step until the number of training iterations or the recognition accuracy reaches a preset condition;
and a network model determination module for taking the resulting occlusion network as the trained occlusion network.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the feature recognition method under occlusion conditions described above when executing the computer program.
A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the feature recognition method under occlusion conditions described above.
With the feature recognition method under occlusion conditions, the apparatus, the computer device, and the storage medium described above, an occluded image is preprocessed and aligned to obtain an aligned image; the aligned image is fed to a trained occlusion network, which extracts common features and occlusion-region features; the two are fused into fusion features; and a recognition network produces the recognition result for the occluded target from the fusion features. Because recognition operates on fusion features built from both feature streams, the accuracy of biometric recognition under occlusion is improved.
Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application environment of a feature recognition method under an occlusion condition according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for feature recognition under occlusion conditions in accordance with an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a feature recognition apparatus under an occlusion condition according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
The feature recognition method under occlusion conditions can be applied in the environment shown in Fig. 1, where a terminal device communicates with a server over a network. The terminal device may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, or a portable wearable device. The server may be implemented as a stand-alone server or as a cluster of servers.
The system framework 100 may include terminal devices, networks, and servers. The network serves as a medium for providing a communication link between the terminal device and the server. The network may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use a terminal device to interact with a server over a network to receive or send messages or the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the feature recognition method under occlusion conditions provided by the embodiment of the present invention is executed by the server, and the feature recognition apparatus under occlusion conditions is accordingly disposed in the server.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are only illustrative; any number may be provided according to implementation requirements, and the terminal devices in the embodiment of the present invention may correspond to an application system in actual production.
In an embodiment, as shown in Fig. 2, a feature recognition method under occlusion conditions is provided. Taking its application to the server in Fig. 1 as an example, the method includes the following steps S201 to S204.
S201: perform image preprocessing and alignment on the occluded image to obtain an aligned image.
Optionally, the performing image preprocessing and alignment on the occluded image to obtain an aligned image includes:
performing target detection on the occluded image and determining the range to be detected;
cropping the image according to the range to be detected to obtain a cropped image;
and aligning the target object in the cropped image to obtain the aligned image, as sketched below.
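A minimal sketch of this preprocessing pipeline in Python with OpenCV. The patent does not name a detector or alignment scheme, so the `detect_target` callable, the two reference landmarks, and the 112x112 output size are illustrative assumptions:

```python
import cv2
import numpy as np

def preprocess_and_align(occluded_img: np.ndarray,
                         detect_target,           # hypothetical: img -> (box, (p_left, p_right))
                         out_size=(112, 112)) -> np.ndarray:
    """Detect the target, crop to the detected range, then align the crop."""
    (x, y, w, h), (p_left, p_right) = detect_target(occluded_img)
    crop = occluded_img[y:y + h, x:x + w]

    # Align by rotating so the two reference landmarks (e.g. eye centers)
    # become horizontal, a common alignment convention for faces.
    angle = np.degrees(np.arctan2(p_right[1] - p_left[1],
                                  p_right[0] - p_left[0]))
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    aligned = cv2.warpAffine(crop, rot, (w, h))
    return cv2.resize(aligned, out_size)
```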
S202: input the aligned image into the trained occlusion network, and use the trained occlusion network to perform common feature extraction and occlusion-region feature extraction on the aligned image to obtain common features and occlusion-region features.
Optionally, the trained occlusion network includes a common feature extraction layer, an occlusion-region supervised segmentation layer, and a recognition layer;
the common feature extraction layer comprises a feature compression module and a feature expansion module, where the feature compression module adopts N ResNet-based residual convolution blocks and the feature expansion module comprises N corresponding deconvolution (transposed convolution) blocks;
the occlusion-region supervised segmentation layer consists of at least three consecutive convolution layers and a Sigmoid layer;
the recognition layer consists of at least three consecutive convolution layers, a fully connected layer, and a Softmax layer.
Specifically, common feature extraction feeds the data-enhanced image into the common feature extraction network, a U-shaped network built from residual convolution modules, to produce a feature map. The feature map extracted by this symmetric U-shaped network not only retains local fine detail but also integrates high-level semantic information, providing sufficient discriminative information for the subsequent segmentation of the non-occluded region and for recognition.
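A minimal PyTorch sketch of such a symmetric U-shaped extractor. The patent fixes only the overall structure (N residual convolution blocks for compression, N transposed-convolution blocks for expansion); the depth N = 3 and the channel widths below are assumptions:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """ResNet-style residual convolution block that halves spatial size."""
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride, 1), nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, 1, 1), nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class CommonFeatureUNet(nn.Module):
    """Symmetric U-shape: N residual blocks compress, N transposed
    convolutions expand; skip connections merge local detail with
    high-level semantics (here N = 3, an assumed depth)."""
    def __init__(self):
        super().__init__()
        self.d1 = ResidualBlock(3, 32)
        self.d2 = ResidualBlock(32, 64)
        self.d3 = ResidualBlock(64, 128)
        self.u2 = nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1)
        self.u1 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)

    def forward(self, x):
        f1 = self.d1(x)                          # H/2,  32 ch
        f2 = self.d2(f1)                         # H/4,  64 ch
        f3 = self.d3(f2)                         # H/8, 128 ch
        y = torch.relu(self.u2(f3) + f2)         # expand + skip connection
        y = torch.relu(self.u1(y) + f1)
        return y                                 # common feature map: H/2, 32 ch
```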
The non-occluded-region extraction network module is then learned. It consists of three consecutive convolution layers and a Sigmoid layer, and outputs a single-channel spatial mask indicating whether each position of the target region is occluded. Its supervision labels are generated by the occlusion-region generation operation, and the module is trained by optimizing a cross-entropy loss with stochastic gradient descent. Because a supervision signal built from these labels guides the whole learning process, the module converges stably.
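One possible implementation of this module. The patent fixes only the three consecutive convolutions, the Sigmoid output, and the cross-entropy/SGD training; the intermediate channel widths are assumptions:

```python
import torch
import torch.nn as nn

class NonOcclusionMaskHead(nn.Module):
    """Three consecutive convolutions followed by Sigmoid, yielding a
    single-channel spatial mask (near 1 = visible, near 0 = occluded)."""
    def __init__(self, in_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, common_feat):
        return self.net(common_feat)

# Trained against the synthetically generated occlusion masks with a
# (binary) cross-entropy loss, optimized by stochastic gradient descent:
mask_head = NonOcclusionMaskHead()
mask_loss = nn.BCELoss()
mask_opt = torch.optim.SGD(mask_head.parameters(), lr=0.01)
```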
Further, before step S202, the method further includes the following steps (a training-loop sketch follows the list):
acquiring an initial occlusion network;
acquiring initial training data, preprocessing the initial training data, and applying data enhancement to the preprocessed data to obtain target training data;
inputting the target training data into the initial occlusion network for training and recognition, and determining a training recognition result based on positive and negative samples in the target training data;
adjusting parameters of the initial occlusion network based on the training recognition result, and returning to the step of inputting the target training data into the initial occlusion network for training and recognition and determining the training recognition result based on the positive and negative samples, repeating until the number of training iterations or the recognition accuracy reaches a preset condition;
and taking the resulting occlusion network as the trained occlusion network.
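A sketch of this training loop under stated assumptions: the network is taken to return (class logits, predicted mask), the loss to combine cross-entropy with mask BCE, and the preset condition to be an epoch cap or a target accuracy; the patent fixes only the iterate-until-preset-condition scheme:

```python
import torch
import torch.nn as nn

def train_occlusion_network(model, loader, epochs=20, target_acc=0.99):
    """Train until a preset epoch count or recognition accuracy is reached.
    Assumes the model returns (class logits, predicted mask) and the loader
    yields (image, ground-truth mask, label) batches."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    ce, bce = nn.CrossEntropyLoss(), nn.BCELoss()
    for epoch in range(epochs):
        correct, total = 0, 0
        for img, mask_gt, label in loader:       # positive and negative samples
            logits, mask_pred = model(img)
            loss = ce(logits, label) + bce(mask_pred, mask_gt)
            opt.zero_grad()
            loss.backward()
            opt.step()                           # parameter adjustment
            correct += (logits.argmax(1) == label).sum().item()
            total += label.numel()
        if correct / total >= target_acc:        # preset accuracy condition
            break
    return model
```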
The data enhancement is specifically random augmentation that expands the number of learning samples for each target. It includes moderate scaling, translation, rotation, and color adjustment. In addition, occluded regions are generated automatically on the target, for example by adding a mask or sunglasses to a face; the generated mask area then serves as the supervision signal for learning the non-occluded-region extraction module.
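For example, a sketch of one such augmentation. The flat gray rectangle over the lower half of an aligned image is a stand-in for pasting a real mask or sunglasses texture, and the jitter ranges are illustrative:

```python
import random
import cv2
import numpy as np

def augment_with_occlusion(img: np.ndarray):
    """Random jitter plus a synthetic occluder; returns the augmented
    image and the visibility mask used as the segmentation supervision."""
    h, w = img.shape[:2]
    # Mild random rotation/scaling and brightness jitter.
    m = cv2.getRotationMatrix2D((w / 2, h / 2),
                                random.uniform(-10, 10), random.uniform(0.9, 1.1))
    img = cv2.warpAffine(img, m, (w, h))
    img = np.clip(img.astype(np.float32) * random.uniform(0.8, 1.2),
                  0, 255).astype(np.uint8)

    # Occlude a random block in the lower half, roughly where a mask sits.
    mask = np.ones((h, w), np.float32)           # 1 = visible, 0 = occluded
    oh, ow = random.randint(h // 4, h // 2), random.randint(w // 2, w)
    oy, ox = random.randint(h // 2, h - oh), random.randint(0, w - ow)
    img[oy:oy + oh, ox:ox + ow] = 128            # flat gray stands in for the occluder
    mask[oy:oy + oh, ox:ox + ow] = 0.0
    return img, mask
```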
S203: perform feature fusion on the common features and the occlusion-region features to obtain fusion features.
Specifically, the extraction result for the non-occluded region is applied to the output feature map of the common feature module through an exponential multiplication operation to perform spatial feature selection. The exponential operation strengthens the target feature region of the non-occluded area while weakening, rather than completely ignoring, the occluded region, which further improves the fault tolerance and effectiveness of the fusion (see the sketch after the following list).
Optionally, the performing feature fusion on the common features and the occlusion-region features to obtain fusion features includes:
performing spatial feature selection on the output feature map of the common features through an exponential multiplication operation to obtain enhanced features;
and performing feature fusion on the enhanced features and the occlusion-region features to obtain the fusion features.
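A sketch of this fusion step. Reading "exponential multiplication" as weighting the common feature map by exp(mask) is an interpretation rather than something the patent text fixes; it gives non-occluded positions up to e times the weight of occluded ones without discarding the latter:

```python
import torch

def fuse_features(common_feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """common_feat: (B, C, H, W) output of the common feature module.
    mask: (B, 1, H, W) non-occlusion mask in [0, 1]."""
    # Spatial feature selection: exp(mask) weights visible positions by up
    # to e and occluded ones by 1, so occlusion is weakened but never zeroed.
    enhanced = common_feat * torch.exp(mask)       # broadcasts over channels
    # Fuse enhanced features with the occlusion-region feature map; channel
    # concatenation is one simple choice, not mandated by the patent.
    return torch.cat([enhanced, mask], dim=1)      # (B, C + 1, H, W)
```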
S204: input the fusion features into the recognition network to obtain the recognition result for the occluded target.
In this embodiment, the recognition network module consists of three consecutive convolution layers with pooling operations, a fully connected layer, and a Softmax layer. Through the pooling operations, the channel count of the feature maps grows while the receptive field implicitly expands, allowing high-level semantics of the target to be extracted. Finally, the Softmax loss function is optimized with stochastic gradient descent to obtain the trained module parameters.
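A sketch of this recognition module; the input channel count, the class count, and the use of max/average pooling are assumptions:

```python
import torch
import torch.nn as nn

class RecognitionHead(nn.Module):
    """Three convolutions, each followed by pooling that enlarges the
    receptive field, then a fully connected layer; Softmax is folded
    into the cross-entropy loss during training."""
    def __init__(self, in_ch=33, num_classes=1000):   # both values assumed
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(256, num_classes)

    def forward(self, fused):
        return self.fc(self.conv(fused).flatten(1))   # class logits

# Optimized with stochastic gradient descent on the Softmax (cross-entropy) loss:
head = RecognitionHead()
head_opt = torch.optim.SGD(head.parameters(), lr=0.01)
head_loss = nn.CrossEntropyLoss()
```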
In this embodiment, an occluded image is preprocessed and aligned to obtain an aligned image; the aligned image is fed to the trained occlusion network, which extracts common features and occlusion-region features; the two are fused into fusion features; and the recognition network produces the recognition result for the occluded target from the fusion features. Recognizing fusion features built from both the common features and the occlusion-region features improves the accuracy of biometric recognition under occlusion.
It should be understood that the step numbers in the above embodiments do not imply an execution order; the execution order of each process is determined by its function and internal logic and does not limit the implementation of the embodiments of the invention.
In an embodiment, a feature recognition apparatus under occlusion conditions is provided, corresponding one-to-one with the feature recognition method under occlusion conditions in the embodiment above. As shown in Fig. 3, the apparatus includes an image processing module 31, a feature extraction module 32, a feature fusion module 33, and an occlusion recognition module 34, described in detail as follows:
the image processing module 31, configured to perform image preprocessing and alignment on the occluded image to obtain an aligned image;
the feature extraction module 32, configured to input the aligned image into the trained occlusion network and use it to perform common feature extraction and occlusion-region feature extraction on the aligned image to obtain common features and occlusion-region features;
the feature fusion module 33, configured to perform feature fusion on the common features and the occlusion-region features to obtain fusion features;
and the occlusion recognition module 34, configured to input the fusion features into the recognition network to obtain the recognition result for the occluded target.
Optionally, the feature fusion module 33 includes:
a feature enhancement unit, configured to perform spatial feature selection on the output feature map of the common features through an exponential multiplication operation to obtain enhanced features;
and a feature fusion unit, configured to fuse the enhanced features with the occlusion-region features to obtain fusion features.
Optionally, the image processing module 31 comprises:
a target detection unit, configured to perform target detection on the occluded image and determine the range to be detected;
an image cropping unit, configured to crop the image according to the range to be detected to obtain a cropped image;
and an alignment unit, configured to align the target object in the cropped image to obtain the aligned image.
Optionally, the feature recognition apparatus under occlusion conditions further includes:
an initial network acquisition module, configured to acquire an initial occlusion network;
a data enhancement module, configured to acquire initial training data, preprocess it, and apply data enhancement to the preprocessed data to obtain target training data;
a training recognition module, configured to input the target training data into the initial occlusion network for training and recognition, and determine a training recognition result based on positive and negative samples in the target training data;
an iteration module, configured to adjust parameters of the initial occlusion network based on the training recognition result and repeat the training-and-recognition step until the number of training iterations or the recognition accuracy reaches a preset condition;
and a network model determination module, configured to take the resulting occlusion network as the trained occlusion network.
Here, "first" and "second" in the above modules/units only distinguish different modules/units and do not define priority or any other limiting meaning. Furthermore, the terms "comprise", "include", and "have", and any variants thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus comprising a list of steps or modules is not necessarily limited to those explicitly listed and may include other steps or modules not listed or inherent to it; the division into modules presented in this application is merely logical and may be implemented differently in practical applications.
For the specific definition of the feature recognition apparatus under occlusion conditions, refer to the definition of the feature recognition method under occlusion conditions above; details are not repeated here. Each module of the apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data involved in the feature recognition method under occlusion conditions. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of feature recognition under occlusion conditions.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor. When the processor executes the computer program, the steps of the feature recognition method under occlusion conditions in the above embodiments are implemented, for example steps S201 to S204 shown in Fig. 2 and the extensions of the method and its related steps. Alternatively, when executing the computer program, the processor implements the functions of the modules/units of the feature recognition apparatus under occlusion conditions in the above embodiments, such as modules 31 to 34 shown in Fig. 3. To avoid repetition, details are not repeated here.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor; it is the control center of the computer device and connects the parts of the whole device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor may implement various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, video data, etc.) created according to the use of the cellular phone, etc.
The memory may be integrated in the processor or may be provided separately from the processor.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the steps of the feature recognition method under occlusion conditions in the above embodiments, such as steps S201 to S204 shown in Fig. 2 and the extensions of the method and its related steps. Alternatively, when executed by the processor, the computer program implements the functions of the modules/units of the feature recognition apparatus under occlusion conditions in the above embodiments, such as modules 31 to 34 shown in Fig. 3. To avoid repetition, details are not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
Claims (10)
1. A feature recognition method under occlusion conditions, the method comprising:
performing image preprocessing and alignment on an occluded image to obtain an aligned image;
inputting the aligned image into a trained occlusion network, and using the trained occlusion network to perform common feature extraction and occlusion-region feature extraction on the aligned image to obtain common features and occlusion-region features;
performing feature fusion on the common features and the occlusion-region features to obtain fusion features;
and inputting the fusion features into a recognition network to obtain the recognition result for the occluded target.
2. The feature recognition method under occlusion conditions of claim 1, wherein the trained occlusion network comprises a common feature extraction layer, an occlusion-region supervised segmentation layer, and a recognition layer;
the common feature extraction layer comprises a feature compression module and a feature expansion module, where the feature compression module adopts N ResNet-based residual convolution blocks and the feature expansion module comprises N corresponding deconvolution (transposed convolution) blocks;
the occlusion-region supervised segmentation layer consists of at least three consecutive convolution layers and a Sigmoid layer;
the recognition layer consists of at least three consecutive convolution layers, a fully connected layer, and a Softmax layer.
3. The feature recognition method under occlusion conditions of claim 1, wherein the performing feature fusion on the common features and the occlusion-region features to obtain fusion features comprises:
performing spatial feature selection on the output feature map of the common features through an exponential multiplication operation to obtain enhanced features;
and performing feature fusion on the enhanced features and the occlusion-region features to obtain the fusion features.
4. The feature recognition method under occlusion conditions of claim 1, wherein the performing image preprocessing and alignment on the occluded image to obtain an aligned image comprises:
performing target detection on the occluded image and determining the range to be detected;
cropping the image according to the range to be detected to obtain a cropped image;
and aligning the target object in the cropped image to obtain the aligned image.
5. The feature recognition method under occlusion conditions of any one of claims 1 to 4, wherein before the inputting the aligned image into the trained occlusion network, the method further comprises:
acquiring an initial occlusion network;
acquiring initial training data, preprocessing the initial training data, and applying data enhancement, including occluded-image synthesis, to the preprocessed data to obtain target training data;
inputting the target training data into the initial occlusion network for training and recognition, and determining a training recognition result based on positive and negative samples in the target training data;
adjusting parameters of the initial occlusion network based on the training recognition result, and returning to the step of inputting the target training data into the initial occlusion network for training and recognition and determining the training recognition result based on the positive and negative samples, repeating until the number of training iterations or the recognition accuracy reaches a preset condition;
and taking the resulting occlusion network as the trained occlusion network.
6. A feature recognition apparatus under occlusion conditions, comprising:
an image processing module for performing image preprocessing and alignment on an occluded image to obtain an aligned image;
a feature extraction module for inputting the aligned image into a trained occlusion network and using the trained occlusion network to perform common feature extraction and occlusion-region feature extraction on the aligned image to obtain common features and occlusion-region features;
a feature fusion module for performing feature fusion on the common features and the occlusion-region features to obtain fusion features;
and an occlusion recognition module for inputting the fusion features into a recognition network to obtain the recognition result for the occluded target.
7. The feature recognition apparatus under occlusion conditions of claim 6, wherein the feature fusion module comprises:
a feature enhancement unit for performing spatial feature selection on the output feature map of the common features through an exponential multiplication operation to obtain enhanced features;
and a feature fusion unit for fusing the enhanced features with the occlusion-region features to obtain the fusion features.
8. The feature recognition apparatus under occlusion conditions of claim 6, wherein the image processing module comprises:
a target detection unit for performing target detection on the occluded image and determining the range to be detected;
an image cropping unit for cropping the image according to the range to be detected to obtain a cropped image;
and an alignment unit for aligning the target object in the cropped image to obtain the aligned image.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the feature recognition method under occlusion conditions of any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the feature recognition method under occlusion conditions of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111676900.4A CN114266946A (en) | 2021-12-31 | 2021-12-31 | Feature recognition method and apparatus under occlusion conditions, computer device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114266946A true CN114266946A (en) | 2022-04-01 |
Family
ID=80832416
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111676900.4A Pending CN114266946A (en) | 2021-12-31 | 2021-12-31 | Feature recognition method and apparatus under occlusion conditions, computer device and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114266946A (en) |
- 2021-12-31: Application CN202111676900.4A filed in China; publication CN114266946A pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108805828A (en) * | 2018-05-22 | 2018-11-13 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer equipment and storage medium |
CN110659582A (en) * | 2019-08-29 | 2020-01-07 | 深圳云天励飞技术有限公司 | Image conversion model training method, heterogeneous face recognition method, device and equipment |
CN110738153A (en) * | 2019-09-30 | 2020-01-31 | 汉王科技股份有限公司 | Heterogeneous face image conversion method and device, electronic equipment and storage medium |
CN111310718A (en) * | 2020-03-09 | 2020-06-19 | 成都川大科鸿新技术研究所 | High-accuracy detection and comparison method for face-shielding image |
CN111539263A (en) * | 2020-04-02 | 2020-08-14 | 江南大学 | Video face recognition method based on aggregation countermeasure network |
CN111915693A (en) * | 2020-05-22 | 2020-11-10 | 中国科学院计算技术研究所 | Sketch-based face image generation method and system |
CN111914628A (en) * | 2020-06-19 | 2020-11-10 | 北京百度网讯科技有限公司 | Training method and device of face recognition model |
CN111814741A (en) * | 2020-07-28 | 2020-10-23 | 四川通信科研规划设计有限责任公司 | Method for detecting embryo-sheltered pronucleus and blastomere based on attention mechanism |
CN113392699A (en) * | 2021-04-30 | 2021-09-14 | 深圳市安思疆科技有限公司 | Multi-label deep convolution neural network method and device for face occlusion detection and electronic equipment |
CN113609900A (en) * | 2021-06-25 | 2021-11-05 | 南京信息工程大学 | Local generation face positioning method and device, computer equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Yang Lujing et al., "Intelligent Image Processing and Applications", China Railway Publishing House, 31 March 2019 *
Dong Hongyi, "Deep Learning with PyTorch: Object Detection in Practice", China Machine Press, 31 January 2020 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114693950A (en) * | 2022-04-22 | 2022-07-01 | 北京百度网讯科技有限公司 | Training method and device for image feature extraction network and electronic equipment |
CN114693950B (en) * | 2022-04-22 | 2023-08-25 | 北京百度网讯科技有限公司 | Training method and device of image feature extraction network and electronic equipment |
CN115631509A (en) * | 2022-10-24 | 2023-01-20 | 智慧眼科技股份有限公司 | Pedestrian re-identification method and device, computer equipment and storage medium |
CN115631509B (en) * | 2022-10-24 | 2023-05-26 | 智慧眼科技股份有限公司 | Pedestrian re-identification method and device, computer equipment and storage medium |
CN116563926A (en) * | 2023-05-17 | 2023-08-08 | 智慧眼科技股份有限公司 | Face recognition method, system, equipment and computer readable storage medium |
CN116563926B (en) * | 2023-05-17 | 2024-03-01 | 智慧眼科技股份有限公司 | Face recognition method, system, equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11710293B2 (en) | Target detection method and apparatus, computer-readable storage medium, and computer device | |
US10936919B2 (en) | Method and apparatus for detecting human face | |
CN114266946A (en) | Feature recognition method and apparatus under occlusion conditions, computer device and medium | |
US20200334830A1 (en) | Method, apparatus, and storage medium for processing video image | |
CN112926654B (en) | Pre-labeling model training and certificate pre-labeling method, device, equipment and medium | |
JP7425147B2 (en) | Image processing method, text recognition method and device | |
CN114550241B (en) | Face recognition method and device, computer equipment and storage medium | |
CN112364799A (en) | Gesture recognition method and device | |
CN113487610B (en) | Herpes image recognition method and device, computer equipment and storage medium | |
JP2023527615A (en) | Target object detection model training method, target object detection method, device, electronic device, storage medium and computer program | |
CN113496208B (en) | Video scene classification method and device, storage medium and terminal | |
CN111898561A (en) | Face authentication method, device, equipment and medium | |
CN112001399A (en) | Image scene classification method and device based on local feature saliency | |
CN112016502B (en) | Safety belt detection method, safety belt detection device, computer equipment and storage medium | |
CN113591751A (en) | Transformer substation abnormal condition warning method and device, computer equipment and storage medium | |
CN111931707A (en) | Face image prediction method, device, equipment and medium based on countercheck patch | |
CN113887527A (en) | Face image processing method and device, computer equipment and storage medium | |
CN116129881A (en) | Voice task processing method and device, electronic equipment and storage medium | |
WO2022257433A1 (en) | Processing method and apparatus for feature map of image, storage medium, and terminal | |
CN113011132A (en) | Method and device for identifying vertically arranged characters, computer equipment and storage medium | |
CN114093027A (en) | Dynamic gesture recognition method and device based on convolutional neural network and readable medium | |
CN113963417A (en) | Face attribute recognition method, terminal and storage medium | |
CN111291186A (en) | Context mining method and device based on clustering algorithm and electronic equipment | |
CN114241537B (en) | Finger vein image authenticity identification method and device, computer equipment and storage medium | |
CN113256660B (en) | Picture processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20220401 |