
CN111209850B - Method for generating applicable multi-device identification finger vein image based on improved cGAN network - Google Patents


Info

Publication number
CN111209850B
CN111209850B (application CN202010007624.1A)
Authority
CN
China
Prior art keywords
image
generator
finger vein
training
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010007624.1A
Other languages
Chinese (zh)
Other versions
CN111209850A (en)
Inventor
张烜
赵国栋
任湘
李学双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Holy Point Century Technology Co ltd
Original Assignee
Holy Point Century Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Holy Point Century Technology Co ltd filed Critical Holy Point Century Technology Co ltd
Priority to CN202010007624.1A priority Critical patent/CN111209850B/en
Publication of CN111209850A publication Critical patent/CN111209850A/en
Application granted granted Critical
Publication of CN111209850B publication Critical patent/CN111209850B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/13: Sensors therefor
    • G06V40/1318: Sensors therefor using electro-optical elements or layers, e.g. electroluminescent sensing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1347: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1365: Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a method for generating finger vein images suitable for multi-device recognition based on an improved cGAN network, comprising the following steps: 1) acquiring or selecting a plurality of image samples X0; 2) segmenting the vein portion of each image sample X0 and obtaining a vein binary image Xbin; 3) designing an improved cGAN network structure matched to the size and format of the binary image Xbin; 4) training the improved cGAN network, updating the corresponding network training parameter values according to the amount of acquired data; 5) inputting the segmented and thinned finger vein line binary image Xbin into the trained improved cGAN network to generate finger vein images for finger vein registration and recognition. By designing an improved cGAN network structure in which the generator G renders a finger vein grayscale image from the input binary image to train the generator and the discriminator D, thereby updating the parameters, the method effectively alleviates the degradation of finger vein recognition performance caused by image quality changes across device states.

Description

Method for generating applicable multi-device identification finger vein image based on improved cGAN network
Technical Field
The invention belongs to the technical field of identity recognition and verification in information security, and particularly relates to a method for generating finger vein images suitable for multi-device recognition based on an improved cGAN network.
Background
Finger vein recognition is a leading-edge biometric technology that uses infrared images of finger veins for identity recognition. It offers three characteristics: living-body recognition, internal (in-vivo) feature recognition, and contactless acquisition. Because the finger veins of the person being recognized are difficult to forge, finger vein recognition systems achieve a high security level and are particularly suitable for high-security settings.
A representative finger vein recognition pipeline is the feature extraction and matching method disclosed in Chinese patent CN101840511B: a finger vein image is captured by an infrared acquisition device, then preprocessed, features are extracted, and recognition analysis is performed. Preprocessing comprises graying the color image, extracting the finger region, applying directional filtering and enhancement, extracting and binarizing the finger vein lines according to the finger contour, denoising with an area-elimination method, and normalizing the image to a uniform size. Feature extraction divides the finger vein line map into sub-blocks and applies a bidirectional two-dimensional principal component analysis with bidirectional eigenvalue-weighted block division to each sub-block image. Recognition analysis treats the features of all sub-blocks as a whole and classifies them with a nearest-neighbor classifier.
In finger vein recognition, the quality of the finger vein image directly affects the recognition result. Devices from different manufacturers differ in design; for example, illumination schemes include top lighting, double-sided lighting and single-sided lighting, so in the absence of a unified standard the collected finger vein images differ to varying degrees. Even devices of the same model can produce varying image quality because of differences in individual components. Consequently, the same finger vein appears differently to different degrees on different devices, which remains an important factor degrading finger vein recognition performance, especially when registration and verification are performed on different devices.
In recent years, many methods for evaluating or enhancing finger vein image quality have been proposed. In 2010, researchers at Harbin Engineering University proposed a finger vein image quality assessment method that computes contrast, position-offset, effective-area and directional-blur quality scores of the finger vein image and combines them by weighted accumulation into an overall evaluation. In 2011, Xie Zhan et al. of the PLA National University of Defense Technology proposed a vein image quality detection method for feature extraction, which segments the finger vein image, extracts the average gradient of the vein region and the vein texture minutiae, and grades image quality according to these conditions. In 2017, Qi Feng et al. of Chongqing Technology and Business University proposed a finger vein image quality evaluation method and system based on a convolutional neural network.
Most of the methods above only evaluate finger vein image quality; they judge whether an image is good or bad, but cannot effectively improve the acquired finger vein images, nor reduce the impact of device differences on the recognition task. When an image registered on one device is verified with an image acquired on another device, the registration template cannot be updated to follow the device-induced changes in the finger vein image, and it is inconvenient for users to directly re-register templates on each device, so the finger vein recognition rate drops to varying degrees. The problem of recognition performance degradation caused by registering on one device and verifying on another has not been addressed, and no related solution has been proposed by researchers.
Disclosure of Invention
The invention aims to solve the low recognition rate caused by finger vein image quality changes when a user registers on one device and verifies on another in existing finger vein recognition technology, and provides a method for generating finger vein images suitable for multi-device recognition based on an improved cGAN network.
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
a method for generating an applicable multi-device identification finger vein image based on an improved cGAN network comprises the following steps:
1) acquiring or selecting a plurality of image samples X0 of a device;
2) segmenting each image sample X0, extracting the vein portion, and obtaining a vein binary image Xbin;
3) designing an improved cGAN network structure matched to the size and format of the binary image Xbin;
4) training the improved cGAN network, updating the corresponding network parameter values according to the amount of acquired data;
5) inputting the segmented and thinned finger vein line binary image Xbin into the trained improved cGAN network to generate finger vein images for finger vein registration and recognition.
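The five steps can be sketched end to end as follows. This is a minimal sketch: `segment_veins` uses a simple global-mean threshold where the patent uses an adaptive one, `generator_stub` stands in for the trained generator G, and all function names are illustrative rather than from the patent.

```python
import numpy as np

def segment_veins(x0):
    # Step 2 (sketch): mark pixels darker than the global mean as vein
    # foreground; the patent uses an adaptive threshold, which this simplifies.
    return (x0 < x0.mean()).astype(np.uint8)

def generator_stub(xbin):
    # Step 5 stand-in for the trained generator G: rescales the binary map
    # to an 8-bit grayscale image of the same size.
    return (xbin * 255).astype(np.uint8)

def generate_for_registration(x0):
    # Steps 2 and 5 chained; steps 3-4 (network design and training) are
    # assumed to have been done offline.
    return generator_stub(segment_veins(x0))

rng = np.random.default_rng(0)
x0 = rng.integers(0, 256, size=(240, 480)).astype(np.uint8)  # step 1: one 240x480 sample
xg = generate_for_registration(x0)
```

The output has the same 240 × 480 geometry as the sample, matching the embodiment's image size.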
Preferably, the improved cGAN network mainly comprises a generator G and a discriminator D. In each training round of the improved cGAN network, the discriminator D is trained first, then the generator G, and finally the parameters of both are updated according to the training result; some parameters of the discriminator D need to be trained only once.
Preferably, the image input to the generator G is the binary image Xbin and the output is a finger vein grayscale image Xg close to the image of another target device; the generator is trained against the finger vein grayscale image Xt of the target device.
Preferably, the discriminator D takes as input the target device image Xt and the generator output Xg. The discriminator learns the grayscale statistics of the target image Xt and the generated image Xg, judges the difference between them, and thereby acquires the ability to distinguish the degree of difference between the generated image Xg and the target image Xt; if the generated finger vein grayscale image Xg becomes closer to the target image Xt, then Xg replaces Xt.
Preferably, the generator G is formed by a series of ENCODE blocks followed by a series of DECODE blocks. Each ENCODE block consists of an asymmetric convolution layer, a normalization layer and an activation layer; each DECODE block consists of an asymmetric deconvolution layer, a normalization layer and an activation layer. An asymmetric convolution adaptation module is connected between every two ENCODE blocks or every two DECODE blocks to adapt the feature map size between the preceding and following modules.
Preferably, the discriminator D comprises a first part and a second part. The first part is formed by ENCODE blocks connected in series, each consisting of an asymmetric convolution layer, a normalization layer and an activation layer, with an asymmetric convolution adaptation module between every two ENCODE blocks to adapt the feature map size between the preceding and following modules; this first part distinguishes real from fake. The second part is an asymmetric convolution network trained only once; it extracts convolution feature vectors and measures their distance.
The asymmetric convolution encoding/decoding structure of the generator G and the discriminator D is designed to extract higher-level image features; the specific parameters of the asymmetric convolution layers are set according to the parameters of the input image.
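Assuming each ENCODE block downsamples by a stride-2 convolution (rounding up at odd sizes) and each DECODE block doubles the spatial size, an assumption consistent with the 240 × 480 to 1 × 2 × 512 figures in the embodiment below, the feature-map sizes can be walked through as:

```python
import math

def encode_size(h, w, steps):
    # Model each ENCODE block as stride-2 downsampling, rounding up at odd sizes.
    for _ in range(steps):
        h, w = math.ceil(h / 2), math.ceil(w / 2)
    return h, w

def decode_size(h, w, steps):
    # Model each DECODE block as a plain 2x spatial upsampling.
    for _ in range(steps):
        h, w = h * 2, w * 2
    return h, w

h, w = encode_size(240, 480, 8)   # 8 ENCODE blocks: 240x480 shrinks to 1x2
H, W = decode_size(h, w, 8)       # 8 plain doublings: 1x2 grows to 256x512
```

Note that eight plain doublings recover 256 × 512 rather than 240 × 480, which is presumably why the adaptation modules between blocks are needed to fit the feature-map sizes back to the input geometry.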
Preferably, in step 4), an objective function is obtained from the training process and principle of the improved cGAN network; it is computed as follows:

L_cGAN(G, D) = E_{x,z}[log D(x, z)] + E_{x,z}[log(1 - D(G(x, z), z))]

L_dist(G) = E_{x,z}[ ||phi(x) - phi(G(x, z))||_2 ]

G* = arg min_G max_D L_cGAN(G, D) + lambda * L_dist(G)

wherein x is the target map, z the binary map, and y = G(x, z) the map produced by the generator; D(., z) denotes the discriminator's judgment of an image paired with the binary map z; min_G max_D means that G minimizes the objective while D maximizes it; ||phi(x) - phi(G(x, z))||_2 denotes the Euclidean distance between the features the two images yield after passing through the recognition network phi; G* denotes the final objective function, i.e. the optimum being solved for, and lambda is a hyperparameter constant.
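A minimal numeric sketch of these loss terms follows; the lambda default and the 1e-8 stabilizer are illustrative assumptions, not values from the patent.

```python
import numpy as np

EPS = 1e-8  # numeric stabilizer (illustrative, not from the patent)

def bce_real(d_out):
    # Discriminator loss term for a real pair: -log D(real); small when D(real) -> 1.
    return -np.log(d_out + EPS)

def bce_fake(d_out):
    # Discriminator loss term for a generated pair: -log(1 - D(fake)).
    return -np.log(1.0 - d_out + EPS)

def generator_loss(d_fake, feat_target, feat_generated, lam=10.0):
    # Adversarial term plus lambda-weighted Euclidean distance between the
    # recognition-network features, mirroring
    # G* = argmin_G max_D L_cGAN(G, D) + lambda * L_dist(G).
    adv = -np.log(d_fake + EPS)                      # G wants D(fake) -> 1
    dist = np.linalg.norm(feat_target - feat_generated)
    return float(adv + lam * dist)
```

The generator's loss falls as the discriminator is fooled more and as the feature distance to the target shrinks.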
Preferably, in step 4), a discriminator D is fixed while seeking the optimal generator, and G* is used as the loss function L(G) for training the generator; the generator parameters are then updated with the Adam algorithm. The gradient descent parameter update is:

theta_G <- theta_G - eta * grad_{theta_G} L(G)

where theta_G denotes the generator parameters in the improved cGAN network and eta the learning rate;

Given an initial generator G_0, one needs to find the D_0 that maximizes L_cGAN(G_0, D), with G* serving as the loss function L(D) for training the discriminator; the discriminator's update uses -L(D) as its training loss, and the parameter update is:

theta_D <- theta_D + eta * grad_{theta_D} L(D)

where theta_D denotes the parameters of the first part of the improved cGAN discriminator;

In the inner training loop, the optimization of the first part of the discriminator D and the optimization of the generator G alternate, updating both so that they steadily approach the optimal solution.
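The Adam update referred to above can be sketched in full; here it is applied to a toy quadratic loss standing in for L(G), with illustrative learning rate and iteration count.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient (m) and its
    # square (v), bias-corrected, then a normalized gradient descent step.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize L(theta) = theta**2 as a stand-in for the generator loss L(G).
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 5001):
    grad = 2 * theta            # gradient of the stand-in loss
    theta, m, v = adam_step(theta, grad, m, v, t)
```

The normalized step makes Adam's progress roughly the learning rate per iteration regardless of gradient scale, which is one reason it is a common choice for GAN training.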
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
the invention relates to a method for generating a finger vein image suitable for multi-device identification based on an improved cGAN network, which improves the structure of the cGAN network through design, and draws a finger vein gray image training generator and a discriminator D through a generator G according to an input binary image.
Drawings
Fig. 1 is a flow chart of a method for generating an applicable multi-device recognition finger vein image based on an improved cGAN network according to the present invention;
FIG. 2 is a functional schematic of a generator;
FIG. 3 is a functional schematic of an arbiter;
FIG. 4 is a schematic diagram of an ENCODE coding block structure;
FIG. 5 is a schematic diagram of a generator architecture;
FIG. 6 is a schematic diagram of a DECODE coding block structure;
FIG. 7 is a schematic diagram of the structure and principle of the discriminator;
FIG. 8 is a schematic diagram of the first part of the optimization principle of the discriminator;
fig. 9 is a schematic diagram of the generator optimization principle.
Detailed Description
For a further understanding of the present invention, reference will now be made in detail to the following examples, which are provided to illustrate, but are not intended to limit the scope of the invention.
Fig. 1 is a flowchart of a method for generating an applicable multi-device recognition finger vein image based on an improved cGAN network according to the present invention, and with reference to fig. 1, the method for generating an applicable multi-device recognition finger vein image based on an improved cGAN network includes the following steps:
1) A finger near-infrared image is captured as image sample X0 using a finger vein image capture device; X0 is a grayscale image of 240 × 480 pixels.
2) The image sample X0 is segmented with an adaptive thresholding method to extract the vein portion, obtaining the vein binary image Xbin.
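The patent does not name the exact adaptive thresholding variant; a common local-mean choice, implemented with an integral image for fast box means, might look like:

```python
import numpy as np

def adaptive_threshold(img, block=31, c=2):
    # Local-mean adaptive threshold (a sketch, not the patent's exact method):
    # a pixel is vein foreground if it is darker than the mean of its
    # block x block neighborhood by more than c gray levels.
    img = img.astype(np.float64)
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    h, w = img.shape
    r = block // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    mean = (ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]) / area
    return (img < mean - c).astype(np.uint8)

img = np.full((240, 480), 200, dtype=np.uint8)
img[100:110, 200:220] = 50   # a dark vein-like patch on a bright background
xbin = adaptive_threshold(img)
```

Because the threshold follows the local mean, the method tolerates the uneven illumination typical of near-infrared finger images.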
3) An improved cGAN network structure matched to the size and format of the binary image Xbin is designed, mainly comprising a generator G and a discriminator D. As shown in figs. 2, 4 and 5, the generator G consists of a series of ENCODE blocks followed by a series of DECODE blocks, with an asymmetric convolution adaptation module connected between every two ENCODE blocks or every two DECODE blocks. With reference to figs. 3, 6 and 7, the discriminator D consists of a first part and a second part: the first part is formed by ENCODE blocks connected in series with an asymmetric convolution adaptation module between every two of them, and the second part is a convolution network that needs to be trained only once.
In the asymmetric convolution encoding part of the generator, the input 240 × 480 image undergoes 8 ENCODE computations of asymmetric convolution, yielding a 1 × 2 × 512 tensor. In the asymmetric deconvolution decoding part, this 1 × 2 × 512 tensor first undergoes an asymmetric deconvolution DECODE computation; the resulting 1 × 2 × 512 tensors are connected into a 2 × 4 × 512 tensor as output, and the remaining asymmetric deconvolution decoding computations proceed in sequence until a 240 × 480 matrix is output. Meanwhile, each convolution feature computed in the encoding part is connected with the deconvolution feature of the same size in the decoding part and used as the input feature of the next decoding layer. This completes the generator part of the adversarial network; the parameter settings are shown in Table 1.
TABLE 1 Generator parameter Table
(The generator parameter table is rendered as an image in the original publication and is not reproduced in this text version.)
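The skip connection described above, concatenating an encoder feature map with the decoder feature map of the same spatial size to form the next decoder layer's input, can be sketched with illustrative shapes:

```python
import numpy as np

# The encoder feature map at a given stage is concatenated along the channel
# axis with the decoder feature map of the same spatial size; the shapes here
# are illustrative, and the real blocks are asymmetric (de)convolutions.
enc_feat = np.zeros((2, 4, 512))   # encoder output at the 2x4 stage
dec_feat = np.zeros((2, 4, 512))   # decoder output at the matching stage
next_input = np.concatenate([dec_feat, enc_feat], axis=-1)
```

Doubling the channel count this way lets the decoder reuse the fine spatial detail captured by the encoder, the usual motivation for U-Net-style connections.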
In the design of the discriminator, the input of the first part is a 240 × 480 × 2 tensor formed by connecting two 240 × 480 matrices; a series of asymmetric convolution ENCODE computations then yields an 8 × 16 × 1 tensor, which is used to judge whether the input image is a real image or one produced by the generator. This completes the discriminator part of the adversarial network; the parameter settings are shown in Table 2.
TABLE 2 first part parameter table of discriminator
(The first-part discriminator parameter table is rendered as an image in the original publication and is not reproduced in this text version.)
In the design of the second part of the discriminator, the input is a 240 × 480 matrix; a series of asymmetric convolution ENCODE computations yields a 1 × 128 vector, the vector distance between the convolved target map and generated map is computed, and this distance value is used to update the parameters. The parameter settings of the second part of the discriminator network are shown in Table 3.
TABLE 3 discriminator second part parameter table
number | layers  | shape         | strides | feature_size
1      | input   | *             | *       | 240×480×1
2      | layer_1 | (2,3,1,64)    | [2,2]   | 120×240×64
3      | layer_2 | (2,3,64,128)  | [2,2]   | 60×120×128
4      | layer_3 | (2,3,128,256) | [2,2]   | 30×60×256
5      | layer_4 | (2,3,256,512) | [2,2]   | 16×30×512
6      | layer_5 | (2,3,512,256) | [2,2]   | 8×16×256
7      | layer_6 | (2,3,256,256) | [2,2]   | 4×8×256
8      | layer_7 | (2,3,256,256) | [2,2]   | 2×4×256
9      | layer_8 | (2,3,256,128) | [2,4]   | 1×1×128
10     | output  | *             | *       | 1×128
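The distance measured by the second part, the Euclidean distance between the two 1 × 128 convolution feature vectors of target map and generated map, can be sketched as follows; the vectors here are illustrative stand-ins for real network outputs.

```python
import numpy as np

def feature_distance(feat_target, feat_generated):
    # Euclidean distance between the 1x128 convolution feature vectors that
    # the second part of the discriminator extracts from the target map and
    # the generated map; this distance value drives the parameter update.
    return float(np.linalg.norm(feat_target - feat_generated))

# Illustrative vectors standing in for the two 1x128 network outputs.
f_t = np.zeros(128)
f_g = np.full(128, 0.25)
d = feature_distance(f_t, f_g)   # equals sqrt(128 * 0.25**2) = sqrt(8)
```

A smaller distance indicates the generated map's features resemble the target's, the signal the generator is trained to minimize.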
4) The improved cGAN network is trained, and the adapted network training parameter values are updated according to the amount of collected data. During training, the image input to the generator G is the binary image Xbin and the output is a finger vein grayscale image Xg close to the original image of the target device; the generator is trained against this finger vein grayscale image. The discriminator D takes as input the target device image Xt and the generator output Xg; it learns the grayscale statistics of Xt and Xg, judges the difference between them, and thereby acquires the ability to distinguish the degree of difference between the generated image Xg and the target device image Xt. If the generated finger vein grayscale image Xg becomes closer to the target device image Xt, then Xg replaces Xt.
When training the improved cGAN network, the second part of the discriminator D is first trained once, after which its parameters remain fixed; then the first part of the discriminator D is trained, followed by the generator G, and finally the parameters of the first part of D and of G are updated according to the training result.
the first part of the discriminant training process is shown in FIG. 8, using XinRepresenting Input image Input by XoutOutput image Output, denoted X, of the generatortRepresenting the Target image acquired by the Target device, the first step consisting in using XinInput generator for obtaining X for generating imageout(ii) a The second step is firstly composed of XoutAnd XinAt the same time, the input discriminator discriminates the true or false by comparison calculation, and then XtAnd XinMeanwhile, the input discriminator discriminates whether the product is true or false through comparison calculation, and respectively counts and judges error rate weighted sum; thirdly, adjusting the parameters of the discriminator by adopting a gradient descending method according to the sum of the judgment error rates of the discriminator, namely optimizing the discriminator, and continuously training the discriminator under the condition that the sample is continuously input;
Training the second part of the discriminator, training the part according to the general classification convolution network characteristic extraction part, and then using XoutAnd XtRespectively inputting the part and obtaining corresponding convolution characteristic vectors through convolution, so that the part has the capability of extracting image characteristics for measuring whether the two images are of the same type, and the parameters of the part are not changed after training;
training procedure of the Generator As shown in FIG. 9, with XinRepresenting Input image Input by XoutOutput image Output, denoted X, of the generatortRepresenting the Target device image Target. The first step consists of XinInput generator generates image Xout(ii) a The second step is firstly composed of XoutAnd XinAt the same time, the first part of the input discriminator is compared and calculated to discriminate the true and false, and then X is used to discriminate the true and falsetAnd XinSimultaneously inputting the first part of the discriminator to discriminate the true and false through comparison calculation, and then discriminating XoutAnd XtThe second part of the input discriminator calculates the distance between the two parts, and respectively counts and judges the error rate and the distance value to carry out weighted summation; thirdly, adjusting generator parameters by adopting a gradient reduction method according to the sum of the judgment error rates of the discriminators, namely training a generator;
An objective function is obtained from the training process and principle of the discriminator and the generator; it is computed as follows:

L_cGAN(G, D) = E_{x,z}[log D(x, z)] + E_{x,z}[log(1 - D(G(x, z), z))]

L_dist(G) = E_{x,z}[ ||phi(x) - phi(G(x, z))||_2 ]

G* = arg min_G max_D L_cGAN(G, D) + lambda * L_dist(G)

wherein x is the target map, z the binary map, and y = G(x, z) the map produced by the generator; D(., z) denotes the discriminator's judgment of an image paired with the binary map z; min_G max_D means that G minimizes the objective while D maximizes it; ||phi(x) - phi(G(x, z))||_2 denotes the Euclidean distance between the features the two images yield after passing through the recognition network phi; G* denotes the final objective function, i.e. the optimum being solved for, and lambda is a hyperparameter constant.
To find the optimal generator, a discriminator D is fixed and G* is used as the loss function L(G) for training the generator; the generator parameters are then updated with the Adam algorithm. The gradient descent parameter update is:

theta_G <- theta_G - eta * grad_{theta_G} L(G)

where theta_G denotes the generator parameters in the improved cGAN network and eta the learning rate.

Given an initial generator G_0, one needs to find the D_0 that maximizes L_cGAN(G_0, D), with G* serving as the loss function L(D) for training the discriminator; the discriminator's update uses -L(D) as its training loss, and the parameter update is:

theta_D <- theta_D + eta * grad_{theta_D} L(D)

where theta_D denotes the parameters of the first part of the improved cGAN discriminator.

In the inner training loop, the optimization of the first part of the discriminator D and the optimization of the generator G alternate, updating both so that they steadily approach the optimal solution.
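The alternating optimization of the discriminator's first part and the generator can be illustrated on a toy min-max objective, here L(g, d) = 2*g*d - d**2, a stand-in with its saddle point (the "optimal solution") at (0, 0), not the patent's actual loss:

```python
# Toy alternating optimization standing in for the inner training loop:
# D takes a gradient ascent step in d to maximize L, then G takes a gradient
# descent step in g to minimize L, for L(g, d) = 2*g*d - d**2.
eta = 0.1
g, d = 1.0, 0.0
for _ in range(200):
    d = d + eta * (2 * g - 2 * d)   # discriminator step: gradient ascent in d
    g = g - eta * (2 * d)           # generator step: gradient descent in g
```

The -d**2 term damps the oscillation that plain simultaneous gradient play would exhibit, so the alternating iterates spiral in toward the saddle, mirroring how discriminator and generator approach the optimum together.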
5) The segmented and thinned finger vein line binary image Xbin is input into the trained improved cGAN network to generate finger vein images for finger vein registration and recognition.
Three different devices were used to collect images, yielding 3 batches of images of 300 fingers. The infrared light source of the first device is above the finger and collected the first batch; the infrared light source of the second device is on both sides of the finger and collected the second batch; the infrared light source of the third device is on the left side of the finger and collected the third batch. 13 images were collected per finger in the first batch and 10 per finger in the second and third batches. 3 images were randomly drawn from the first batch for registration, and the original images of the first batch (the remaining 10 images), the second batch and the third batch were used for verification; at the same time, the same number of images were generated from the verification images by the method of the invention and used for verification, and the verification pass rates were compared.
As Table 3 shows, when verifying with original images after registration, the average pass rate with the first batch's original images (the remaining 10 images) is 99.87%; with the second batch's original images it drops by 2.26% to 97.61%; and with the third batch's original images it drops by 6.07% to 93.80%. When the finger vein images generated by the proposed method for generating multi-device recognition finger vein images based on the improved cGAN network are used for verification, the average pass rate with the generated images of the first batch is 99.66%; with the generated images of the second batch it drops by only 0.42% to 99.24%; and with the generated images of the third batch by only 0.63% to 99.03%. The comparison shows that after registering with images from the first device, verification with original images from the other devices markedly lowers the pass rate, whereas verification with images generated by the method of the invention in the other device states shows no obvious drop, and the remaining drop still meets practical requirements.
TABLE 3 statistical table of pass rates (registration with 3 first-batch images)

verification images | first batch | second batch | third batch
original images     | 99.87%      | 97.61%       | 93.80%
generated images    | 99.66%      | 99.24%       | 99.03%
The present invention has been described in detail with reference to the embodiments, but the description is only for the preferred embodiments of the present invention and should not be construed as limiting the scope of the present invention. All equivalent changes and modifications made within the scope of the present invention shall fall within the scope of the present invention.

Claims (4)

1. A method for generating finger vein images suitable for multi-device recognition based on an improved cGAN network, characterized in that it comprises the following steps:
1) acquiring or selecting a plurality of image samples X0;
2) segmenting the image samples X0, extracting the vein portion, and obtaining a vein binary image Xbin;
3) designing, according to the size and format of the binary image Xbin, a matching improved cGAN network structure, which mainly comprises a generator G and a discriminator D; the generator G is formed by a series of ENCODE coding blocks followed by a series of DECODE decoding blocks, each ENCODE block consisting of an asymmetric convolution layer, a normalization layer and an activation layer, and each DECODE block consisting of an asymmetric deconvolution layer, a normalization layer and an activation layer; an asymmetric-convolution adaptive module is connected between every two ENCODE blocks and between every two DECODE blocks, and serves to adapt the feature-map sizes between adjacent coding modules; the discriminator D consists of a first part and a second part: the first part is formed by several ENCODE blocks connected in series, each consisting of an asymmetric convolution layer, a normalization layer and an activation layer, with an asymmetric-convolution adaptive module connected between every two ENCODE blocks to adapt the feature-map sizes between the coding modules, and its function is to distinguish real images from fake ones; the second part is an asymmetric convolution network that is trained only once and serves to extract convolution feature vectors and measure distances;
4) training the improved cGAN network: the image input to the generator G is the binary image Xbin, and the output is a finger vein grayscale image Xg close to the original image; the generator is trained against the target-device finger vein grayscale image Xt; the discriminator D learns the gray levels of the target-device image Xt and of the image Xg generated by the generator, judges the difference between the gray level of Xt and that of Xg, and thereby acquires the ability to judge the degree of difference between the generated image Xg and the target-device image Xt; once the generated finger vein grayscale image Xg is sufficiently close to the target-device image Xt, Xg can replace Xt; the adaptive-network training parameter values are trained and updated according to the amount of acquired data;
5) inputting the segmented and thinned finger vein line binary image Xbin into the trained improved cGAN network to generate finger vein images for finger vein registration and recognition.
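The asymmetric convolution layers named in the claims follow the idea of the ACNet paper listed in this patent's non-patent citations; the following numpy sketch (sizes and kernels are illustrative, not the patent's actual layers) shows why parallel 3x3, 1x3 and 3x1 branches can be fused into a single 3x3 kernel:

```python
import numpy as np

# Sketch of the asymmetric-convolution idea: at training time, parallel
# 3x3, 1x3 and 3x1 branches are run and their outputs summed; at inference
# time the three kernels fuse into one 3x3 kernel with identical output.

def conv2d(img, k):
    """Minimal 'same'-padded 2-D cross-correlation (stride 1)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k33 = rng.standard_normal((3, 3))   # square branch
k13 = rng.standard_normal((1, 3))   # horizontal asymmetric branch
k31 = rng.standard_normal((3, 1))   # vertical asymmetric branch

# Training-time view: run the three branches and sum their outputs.
branch_sum = conv2d(img, k33) + conv2d(img, k13) + conv2d(img, k31)

# Inference-time view: embed the asymmetric kernels into the centre
# row/column of the 3x3 kernel, then run a single convolution.
fused = k33.copy()
fused[1, :] += k13[0, :]
fused[:, 1] += k31[:, 0]
assert np.allclose(branch_sum, conv2d(img, fused))
```

Because the branches fuse exactly, the asymmetric branches add training-time capacity without increasing the inference-time cost of a plain 3x3 convolution.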
2. The method for generating finger vein images applicable to multi-device recognition based on an improved cGAN network according to claim 1, characterized in that: in each training round of the improved cGAN network, the discriminator D is trained first, then the generator G, and finally the parameters of the discriminator D and the generator G are updated according to the training results; some of the parameters in the discriminator D need to be trained only once.
3. The method for generating finger vein images applicable to multi-device recognition based on an improved cGAN network according to claim 1, characterized in that: in step 4), an objective function is derived from the process and principle of training the improved cGAN network, and is calculated as follows:
$L_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))]$ (1),

$L_{dist}(G) = \mathbb{E}_{x,z}\big[\lVert F(x) - F(G(x,z)) \rVert_2\big]$ (2),

$G^{*} = \arg\min_{G}\max_{D} L_{cGAN}(G,D) + \lambda L_{dist}(G)$ (3),

wherein x is the target map, y is the generated map, z is the binary map, G(x,z) denotes the generated map y produced by the generator, D(x,y) denotes the discrimination result of the discriminator, $\min_{G}\max_{D} L_{cGAN}(G,D)$ means that G minimizes the objective function while D maximizes it, $L_{dist}(G)$ is the Euclidean distance between the features extracted after the two images pass through the recognition network (F denoting that feature extraction), $G^{*}$ is the final objective function, i.e. the optimal generator to be solved, and $\lambda$ is a hyperparameter constant.
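As a rough numeric illustration of the combined objective of equations (1)-(3) (the value lam = 100.0 and the toy discriminator outputs and feature vectors are assumptions for the sketch, not values from the patent):

```python
import numpy as np

def cgan_objective(d_real, d_fake, feat_real, feat_fake, lam=100.0):
    """Toy evaluation of the combined objective of equations (1)-(3).

    d_real, d_fake -- discriminator outputs in (0, 1) for the target pair
                      and the generated pair
    feat_real, feat_fake -- recognition-network feature vectors
    lam -- the hyperparameter lambda weighting the feature distance
    """
    l_cgan = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
    l_dist = np.linalg.norm(feat_real - feat_fake)  # Euclidean feature distance
    return l_cgan + lam * l_dist

f = np.ones(4)
# identical features -> zero feature-distance penalty
val_same = cgan_objective(np.array([0.9]), np.array([0.5]), f, f)
# shifted features -> the lambda-weighted distance term raises the objective
val_far = cgan_objective(np.array([0.9]), np.array([0.5]), f, f + 1.0)
assert val_far > val_same
```

With lam = 0 the function reduces to the plain cGAN term of equation (1), so the lambda term is purely a feature-similarity penalty on the generator.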
4. The method for generating finger vein images applicable to multi-device recognition based on an improved cGAN network according to claim 1 or 3, characterized in that: in step 4), given a discriminator D, the optimal generator is sought by taking $G^{*}$ of formula (3) as the loss function $L(G)$ for training the generator, and the Adam algorithm is then used to update the generator parameters; the gradient-descent parameter optimization process is:

$\theta_G \leftarrow \theta_G - \eta \nabla_{\theta_G} L(G)$ (4)

where $\theta_G$ denotes the parameters of the generator in the improved cGAN network;

when an initial generator $G_0$ is given, the discriminator D that maximizes $L_{cGAN}(G,D)$ must be found, with $G^{*}$ serving as the loss function $L(D)$ for training the discriminator; the update process of the discriminator is then the training process of $-L(D)$, and the parameter optimization formula is:

$\theta_D \leftarrow \theta_D + \eta \nabla_{\theta_D} L(D)$ (5)

where $\theta_D$ denotes the parameters of the first-part network of the improved cGAN discriminator;

in the inner training loop, the steps of optimizing the first part of the discriminator D and of optimizing the generator G are carried out alternately, so that the first part of the discriminator D and the generator G are both updated and steadily approach the optimal solution.
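The alternating update scheme of claim 4 can be sketched on a toy two-player objective; plain gradient steps stand in for Adam here, and the game V(g, d) = g*d - 0.5*d**2 is an illustrative stand-in, not the patent's objective:

```python
# Toy alternating optimisation mirroring claim 4: the discriminator takes a
# gradient-ASCENT step (cf. formula (5)), then the generator takes a
# gradient-DESCENT step (cf. formula (4)).  Plain gradient steps stand in
# for Adam, and V(g, d) = g*d - 0.5*d**2 is an illustrative stand-in game
# whose equilibrium is (g, d) = (0, 0).
def train(steps=500, lr=0.1):
    g, d = 1.0, 0.0                     # initial generator/discriminator params
    for _ in range(steps):
        d += lr * (g - d)               # dV/dd = g - d  (maximise V over d)
        g -= lr * d                     # dV/dg = d      (minimise V over g)
    return g, d

g, d = train()
assert abs(g) < 1e-3 and abs(d) < 1e-3  # both parameters approach equilibrium
```

For this game the discriminator's best response is d = g, after which the generator's descent drives g toward 0, so the alternating loop spirals into the equilibrium, mirroring how the two networks are updated in turn toward the optimal solution.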
CN202010007624.1A 2020-01-04 2020-01-04 Method for generating applicable multi-device identification finger vein image based on improved cGAN network Active CN111209850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010007624.1A CN111209850B (en) 2020-01-04 2020-01-04 Method for generating applicable multi-device identification finger vein image based on improved cGAN network


Publications (2)

Publication Number Publication Date
CN111209850A CN111209850A (en) 2020-05-29
CN111209850B true CN111209850B (en) 2021-02-19

Family

ID=70785576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010007624.1A Active CN111209850B (en) 2020-01-04 2020-01-04 Method for generating applicable multi-device identification finger vein image based on improved cGAN network

Country Status (1)

Country Link
CN (1) CN111209850B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950454B (en) * 2020-08-12 2024-04-02 辽宁工程技术大学 Finger vein recognition method based on bidirectional feature extraction
CN113689344B (en) * 2021-06-30 2022-05-27 中国矿业大学 Low-exposure image enhancement method based on feature decoupling learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215123A (en) * 2018-09-20 2019-01-15 University of Electronic Science and Technology of China Unrestricted terrain generation method, system, storage medium and terminal based on cGAN
CN110264424A (en) * 2019-06-20 2019-09-20 Beijing Institute of Technology Blurred retinal fundus image enhancement method based on generative adversarial network
CN110675353A (en) * 2019-08-31 2020-01-10 University of Electronic Science and Technology of China Selective segmentation image synthesis method based on conditional generative adversarial network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991368A (en) * 2017-02-20 2017-07-28 Peking University Finger vein verification identity recognition method based on deep convolutional neural networks
CN109035149B (en) * 2018-03-13 2021-07-09 Hangzhou Dianzi University License plate image motion deblurring method based on deep learning
US10825219B2 (en) * 2018-03-22 2020-11-03 Northeastern University Segmentation guided image generation with adversarial networks
CN109166126B (en) * 2018-08-13 2022-02-18 苏州比格威医疗科技有限公司 Method for segmenting paint-crack lesions in ICGA images based on conditional generative adversarial network
CN110223259A (en) * 2019-06-14 2019-09-10 North China Electric Power University (Baoding) Road traffic blurred image enhancement method based on generative adversarial network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks";Xiaohan Ding.et al;《arxiv:1908.03930v1》;20190811;期刊全文 *
"基于生成对抗网络的图像修复";孙全等;《计算机科学》;20181231;第45卷(第12期);全文 *
"生成对抗网络(GAN)相关笔记";Eree;《https://ereebay.me/posts/59881》;20190301;文章全文 *


Similar Documents

Publication Publication Date Title
CN112580590B Finger vein recognition method based on multi-semantic feature fusion network
CN111126240B Three-channel feature fusion face recognition method
CN105718889B Face identification method based on the GB(2D)2PCANet deep convolution model
CN102156887A Face recognition method based on local feature learning
CN106529504B Bimodal video emotion recognition method based on composite spatio-temporal features
CN106529468A Finger vein identification method and system based on convolutional neural network
CN102902980B Biometric image analysis and recognition method based on a linear programming model
CN101710383A Method and device for identity authentication
CN110443128A Finger vein identification method based on accurate SURF feature point matching
CN102521575A Iris identification method based on multidirectional Gabor and Adaboost
CN111274915B Deep local aggregation descriptor extraction method and system for finger vein image
CN115994907B Intelligent processing system and method for comprehensive information of food detection mechanism
CN108596126A Finger vein image recognition method based on improved LGS weighted coding
CN109840512A Facial action unit recognition method and recognition device
CN114973307B Finger vein recognition method and system based on generative adversarial and cosine triplet loss functions
CN109614869A Pathological image classification method based on multi-scale compression reward-punishment network
CN100351852C Iris recognition method based on wavelet transform and maximum detection
CN111209850B Method for generating applicable multi-device identification finger vein image based on improved cGAN network
CN110046565A Face detection method based on the Adaboost algorithm
Heidari et al. A new biometric identity recognition system based on a combination of superior features in finger knuckle print images
CN115311746A Offline signature authenticity detection method based on multi-feature fusion
Raghavendra et al. An efficient finger vein indexing scheme based on unsupervised clustering
CN112101319A Vein image classification method and device based on topographic point classification
CN105512682B Security level identification recognition method based on Krawtchouk moments and a KNN-SMO classifier
CN107103289A Method and system for writer verification using handwriting contour features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method of generating finger vein image suitable for multi device recognition based on improved cgan network

Effective date of registration: 20210927

Granted publication date: 20210219

Pledgee: Shanxi Financing Guarantee Co.,Ltd.

Pledgor: Holy Point Century Technology Co.,Ltd.

Registration number: Y2021140000037

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20221115

Granted publication date: 20210219

Pledgee: Shanxi Financing Guarantee Co.,Ltd.

Pledgor: Holy Point Century Technology Co.,Ltd.

Registration number: Y2021140000037

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method for generating finger vein images suitable for multi device recognition based on an improved cGAN network

Granted publication date: 20210219

Pledgee: Bank of China Limited Taiyuan Binzhou sub branch

Pledgor: Holy Point Century Technology Co.,Ltd.

Registration number: Y2024140000011
