CN110991374A - Fingerprint singular point detection method based on RCNN
- Publication number
- CN110991374A CN110991374A CN201911255304.1A CN201911255304A CN110991374A CN 110991374 A CN110991374 A CN 110991374A CN 201911255304 A CN201911255304 A CN 201911255304A CN 110991374 A CN110991374 A CN 110991374A
- Authority
- CN
- China
- Prior art keywords
- network
- image
- fingerprint
- singular point
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1365—Matching; Classification
Abstract
The invention provides an RCNN-based fingerprint singular point detection method comprising the following steps: constructing a data set, enhancing the fingerprint image, segmenting the fingerprint image, detecting singular points in the fingerprint image, and checking accuracy. Compared with traditional fingerprint singular point detection methods, the method innovatively performs detection within an RCNN-style framework built on convolutional networks, and offers high detection speed, high accuracy, and high efficiency. The image enhancement stage reduces the requirement on fingerprint image quality, and the use of the block (region) network removes the data augmentation step required by prior processing methods.
Description
Technical Field
The invention relates to an image singular point detection method, and in particular to a fingerprint singular point detection method, belonging to the fields of computer vision and deep learning.
Background
Because of their uniqueness, fingerprint images are now widely used as identification labels in access control, criminal investigation, and other applications: the owner of a fingerprint image can be determined by checking its consistency with images in a database. Singular points, as essential global features and salient landmarks of a fingerprint image, are invariant to rotation, deformation, and the like, and are therefore suitable for many fingerprint recognition scenarios such as fingerprint retrieval and fingerprint classification.
The Poincaré index is widely applied to fingerprint singular point detection, but methods based on it are generally susceptible to image noise, perform poorly on low-quality fingerprint images, and tend to incur a large computational cost. Most existing singular point detection methods are extensions of the Poincaré index. Combining the Poincaré index with a multi-scale detection algorithm, for example, only requires computing singular points in candidate regions and can effectively improve detection speed, but its accuracy is not ideal; likewise, the performance of the zero-pole model combined with the Hough transform is limited by the accuracy of the Poincaré index.
Deep convolutional neural networks have driven many recent advances in computer vision and are widely used in biometric pattern recognition, video recognition, and related fields with good results. The RCNN family is highly effective for object detection: it applies a high-capacity convolutional neural network to bottom-up region proposals to localize and segment targets. When labeled training data are scarce, RCNN can fine-tune from pre-trained parameters, which markedly improves recognition; this scheme of supervised pre-training on a large sample set followed by fine-tuning on a small one effectively alleviates the difficulty, and even the overfitting, of training on small samples.
Summary of the invention:
In view of the defects of the conventional methods, the present invention provides an RCNN-based fingerprint singular point detection method, whose implementation flow is shown in FIG. 1. The purpose of the invention is to extract the fingerprint singular points shown in FIG. 2b from the fingerprint image shown in FIG. 2a more efficiently and accurately, while reducing the requirement on the quality of the sample fingerprint images.
To achieve this purpose, after the computer reads the original fingerprint image, the invention carries out the following steps:
step one, constructing a data set: acquire 256 x 320 original fingerprint grayscale images containing noise, manually enhance the images, label the ground truth, normalize the images, and divide the data into a training set and a test set at a ratio of 8:2;
step two, image enhancement: construct an encoder-decoder convolutional neural network for image enhancement, consisting of an encoding network module and a decoding network module. Train the image enhancement network on the original data set, and save the 256 x 320 fingerprint images predicted by the network as the input of step three;
step three, image segmentation: divide the enhanced fingerprint image into a number of regions of size 41 x 41 along a grid, manually label the category of each region, represent the categories with a matrix as the ground truth, and set a probability threshold for screening the classification results. Train a Res-net classifier on the enhanced image data set. For the output of each region, retain the regions above the probability threshold for singular point coordinate detection;
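The grid partition of step three can be sketched in Python as follows; the patent does not say how partial tiles at the image borders are handled, so this sketch (an assumption) simply drops them:

```python
# Sketch of the step-three grid partition: split a 256 x 320 grayscale image
# into non-overlapping 41 x 41 regions. Partial tiles at the right and
# bottom borders are dropped here, which is an assumption not stated in the
# patent text.

def split_into_regions(image, tile=41):
    """image: 2-D list (rows x cols); returns (row0, col0, tile) boxes."""
    rows, cols = len(image), len(image[0])
    boxes = []
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            boxes.append((r, c, tile))
    return boxes

image = [[0] * 320 for _ in range(256)]
boxes = split_into_regions(image)
# 256 // 41 = 6 tile rows and 320 // 41 = 7 tile columns -> 42 regions
```

Each box can then be cropped out and fed to the Res-net classifier described below.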
step four, singular point detection: take the region images containing singular points from step three as input and the normalized singular point coordinates as output, and train an FCN; this is essentially regression on the proposed regions of interest;
step five, accuracy calculation: extract the FCN predictions from step four, compare them with the true values, and compute the prediction accuracy of the method. Based on the Euclidean distance between a predicted point and the true point, a prediction whose distance is below the threshold is regarded as a successful detection.
For step one, manual image enhancement means filtering, denoising, and similar operations using image processing techniques; labeling the ground truth means manually marking the positions of the singular points, reading their coordinates, and saving them as a csv file. Image normalization means dividing the gray value of every pixel by 255 so that the values lie in the range [0, 1].
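The normalization and 8:2 split of step one can be sketched as follows; whether the split is shuffled is not specified in the patent, so a plain ordered split (an assumption) is shown:

```python
# Step one as a sketch: scale grayscale values into [0, 1] by dividing by
# 255, then split the samples 8:2 into training and test sets. The ordered
# split below is an assumption; the patent does not state whether the
# samples are shuffled first.

def normalize(image):
    return [[pixel / 255 for pixel in row] for row in image]

def split_dataset(samples, train_ratio=0.8):
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

img = [[0, 128, 255]]
norm = normalize(img)                         # values now in [0, 1]
train, test = split_dataset(list(range(10)))  # 8 training, 2 test samples
```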
For step two, the image enhancement network consists of an encoder network and a decoder network. The encoder network is composed of two identical modules, each containing two convolutional layers (3 x 3 kernels, 16 and 64 channels in turn, stride 1) and one max pooling layer (2 x 2 window). The decoder network is composed of two modules, each containing one upsampling layer (2 x 2 window) and two convolutional layers (3 x 3 kernels, 64 and 16 channels in turn, stride 1), followed finally by one convolutional layer with a 1 x 1 kernel. During training, the mean square error is used as the loss function, and stochastic gradient descent is used for parameter optimization.
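A quick shape check for this enhancement network: assuming the 3 x 3 convolutions use 'same' padding (the stride is given as 1, but the padding is not stated), the two 2 x 2 poolings and two 2 x 2 upsamplings leave the 256 x 320 input size unchanged at the output:

```python
# Spatial-shape walk-through of the step-two encoder-decoder network.
# Assumption: the 3 x 3 convolutions preserve spatial size ('same' padding),
# which the patent does not state explicitly. Two 2 x 2 max poolings halve
# 256 x 320 twice; two 2 x 2 upsamplings restore it; the final 1 x 1
# convolution keeps the spatial size.

def encoder_decoder_shape(h, w, n_pool=2, n_up=2):
    for _ in range(n_pool):     # encoder: conv keeps h, w (same padding);
        h, w = h // 2, w // 2   # 2 x 2 max pooling halves each dimension
    for _ in range(n_up):       # decoder: 2 x 2 upsampling doubles them
        h, w = h * 2, w * 2
    return h, w

assert encoder_decoder_shape(256, 320) == (256, 320)   # output matches input
```

This symmetry is what lets the network emit a full-resolution 256 x 320 enhanced fingerprint image for step three.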
For step three, the matrix labeling the category of each region is C = (c1, c2, c3), ci ∈ {0, 1}, i = 1, 2, 3, where C is the class matrix, c1 indicates whether the region contains a singular point, c2 indicates whether the region contains a core point, and c3 indicates whether the region contains a triangle point. The probability threshold depends on the particular data set and is generally slightly below the maximum of the predicted probabilities. The Res-net in this step has the following structure: a convolutional layer with a 5 x 5 kernel and 16 channels; a downsampling layer with a 2 x 2 window and 16 channels; a convolutional layer with a 5 x 5 kernel and 32 channels, with a residual connection extending to the next convolutional layer; a downsampling layer with a 2 x 2 window and 64 channels; a convolutional layer with a 5 x 5 kernel and 64 channels; and a fully connected layer. The training parameters of this network are set as in step two.
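The probability-threshold screening of step three can be sketched as follows; the region probabilities used here are made-up illustrations, not actual classifier output:

```python
# Screening in step three as a sketch: each region receives a singular-point
# probability from the Res-net classifier, and only regions whose probability
# exceeds the threshold are passed on to the coordinate-regression step.
# The probabilities below are illustrative values, not classifier output.

def screen_regions(region_probs, threshold):
    """region_probs: {region_id: p(singular point)} -> ids kept for step four."""
    return [rid for rid, p in region_probs.items() if p > threshold]

probs = {0: 0.12, 1: 0.91, 2: 0.55, 3: 0.88}
kept = screen_regions(probs, threshold=0.8)   # regions 1 and 3 survive
```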
For step four, the training set consists of the 41 x 41 region grayscale images that exceeded the probability threshold in step three. The singular point coordinate in each region is computed from the coordinates labeled on the original image and then normalized, where xi is the original coordinate, xi' is the singular point coordinate within the region grayscale image, x̂i' is the normalized coordinate value, and n is the number of fingerprint pictures in the data set. The FCN in this step consists of four similar modules, each composed of two convolutional layers (3 x 3 kernels) and one max pooling layer (2 x 2 window), with 16, 64, 128, and 256 channels in the four modules in turn; this is followed by two fully connected layers with 256 and 2 nodes respectively. The network performs regression using stochastic gradient descent and learns by backpropagating the mean square error. Because the input pictures are small, the CNN in this step learns effectively and the predicted values are highly accurate.
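A plausible reading of this coordinate normalization (the formula itself is not legible in the source, so both the patch-origin subtraction and the division by the 41-pixel patch size are assumptions):

```python
# Hypothetical reconstruction of the step-four coordinate normalization:
# the global coordinate x_i is converted to a patch-local coordinate x_i'
# by subtracting the patch origin, then normalized by the 41-pixel patch
# size. Treat both steps as assumptions; the original equation is lost.

def to_local(x, patch_origin):
    return x - patch_origin

def normalize_coord(x_local, patch_size=41):
    return x_local / patch_size

x_local = to_local(130, patch_origin=123)   # singular point at column 130,
norm = normalize_coord(x_local)             # in a patch starting at column 123
```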
For step five, the Euclidean distance used is d = sqrt((px - gx)^2 + (py - gy)^2), where px and py are the horizontal and vertical coordinates of the predicted point, gx and gy are the horizontal and vertical coordinates of the true singular point, and the distance is compared with the threshold value.
For step five, the threshold is determined by the picture size, generally about one tenth of the image dimension; for the picture size used here, the threshold is set to 20 pixels.
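The success criterion and accuracy computation of step five can be sketched as follows, with illustrative sample points:

```python
# Step five as a sketch: a prediction counts as a successful detection when
# its Euclidean distance to the true singular point is below the threshold
# (20 pixels here); accuracy is the fraction of successes. The sample points
# are illustrative only.
import math

def is_detected(pred, truth, threshold=20):
    px, py = pred
    gx, gy = truth
    return math.hypot(px - gx, py - gy) < threshold

def accuracy(preds, truths, threshold=20):
    hits = sum(is_detected(p, g, threshold) for p, g in zip(preds, truths))
    return hits / len(preds)

# first pair is ~11.2 px apart (a hit), second is ~70.7 px apart (a miss)
acc = accuracy([(100, 100), (10, 10)], [(110, 105), (60, 60)])
```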
Description of the drawings:
FIG. 1 is a flow chart of one embodiment of the present invention
FIGS. 2a and 2b show the original fingerprint image and the detection result of the embodiment of FIG. 1
The specific implementation process comprises the following steps:
the RCNN-based fingerprint singular point detection method is further described below with reference to a flowchart and an embodiment.
The whole method mainly comprises the following five steps: constructing the data set, enhancing the fingerprint image, segmenting the fingerprint image, detecting singular point coordinates, and checking accuracy.
Step one, acquire 256 x 320 original fingerprint grayscale images containing noise, manually enhance the images, label the ground truth, normalize the images, and divide the data into a training set and a test set at a ratio of 8:2;
and step two, constructing a de-coding convolutional neural network for image enhancement, wherein the de-coding convolutional neural network consists of a coding network module and a decoding network module. Training an image enhancement network by using an original data set, and storing 256 × 320 fingerprint images output by network prediction as input of the third step;
and step three, dividing the enhanced fingerprint image into regions with the size of 41 × 41 according to grids, manually marking the category to which each region belongs, representing the categories by using a matrix, and then setting a probability threshold value for screening classified results. Training a Res-net classifier by using the enhanced image data set, and reserving a region higher than a probability threshold value for singular point coordinate detection;
Step four, take the region images containing singular points from step three as input and the normalized singular point coordinates as output, and train the FCN;
and step five, extracting the prediction result of the FCN in the step four, comparing the prediction result with the true value, and calculating the prediction accuracy of the method. And taking the Euclidean distance between the predicted point and the real point as a basis, and regarding the point with the distance lower than the threshold value as successful detection.
For step one, manual image enhancement means filtering, denoising, and similar operations using image processing techniques; labeling the ground truth means manually marking the positions of the singular points, reading their coordinates, and saving them as a csv file. Image normalization means dividing the gray value of every pixel by 255 so that the values lie in the range [0, 1].
For step two, the image enhancement network consists of an encoder network and a decoder network. The encoder network is composed of two identical modules, each containing two convolutional layers (3 x 3 kernels, 16 and 64 channels in turn, stride 1) and one max pooling layer (2 x 2 window). The decoder network is composed of two modules, each containing one upsampling layer (2 x 2 window) and two convolutional layers (3 x 3 kernels, 64 and 16 channels in turn, stride 1), followed finally by one convolutional layer with a 1 x 1 kernel. During training, the mean square error is used as the loss function, and stochastic gradient descent is used for parameter optimization.
For step three, the matrix labeling the category of each region is C = (c1, c2, c3), ci ∈ {0, 1}, i = 1, 2, 3, where C is the class matrix, c1 indicates whether the region contains a singular point, c2 indicates whether the region contains a core point, and c3 indicates whether the region contains a triangle point. The probability threshold depends on the particular data set and is generally slightly below the maximum of the predicted probabilities. The Res-net in this step has the following structure: a convolutional layer with a 5 x 5 kernel and 16 channels; a downsampling layer with a 2 x 2 window and 16 channels; a convolutional layer with a 5 x 5 kernel and 32 channels, with a residual connection extending to the next convolutional layer; a downsampling layer with a 2 x 2 window and 64 channels; a convolutional layer with a 5 x 5 kernel and 64 channels; and a fully connected layer. The training parameters of this network are set as in step two.
For step four, the training set consists of the 41 x 41 region grayscale images that exceeded the probability threshold in step three. The singular point coordinate in each region is computed from the coordinates labeled on the original image and then normalized, where xi is the original coordinate, xi' is the singular point coordinate within the region grayscale image, x̂i' is the normalized coordinate value, and n is the number of fingerprint pictures in the data set. The FCN in this step consists of four similar modules, each composed of two convolutional layers (3 x 3 kernels) and one max pooling layer (2 x 2 window), with 16, 64, 128, and 256 channels in the four modules in turn; this is followed by two fully connected layers with 256 and 2 nodes respectively. The network performs regression using stochastic gradient descent and learns by backpropagating the mean square error. Because the input pictures are small, the CNN in this step learns effectively and the predicted values are highly accurate.
For step five, the Euclidean distance used is d = sqrt((px - gx)^2 + (py - gy)^2), where px and py are the horizontal and vertical coordinates of the predicted point, gx and gy are the horizontal and vertical coordinates of the true singular point, and the distance is compared with the threshold value.
For step five, the threshold is determined by the picture size, generally about one tenth of the image dimension; for the picture size used here, the threshold is set to 20 pixels.
The RCNN-based fingerprint singular point detection method provided by the invention achieves high detection speed, high accuracy, and high efficiency on the basis of the RCNN framework; the image enhancement stage reduces the requirement on fingerprint image quality, and the block network removes the need for a data augmentation step, simplifying the training process.
The method provided by the invention has been described in detail above; the principle and implementation of the invention are explained through specific examples, and the description of the embodiments is intended only to aid understanding of the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present invention; the content of this specification should not be construed as limiting the invention.
Claims (9)
1. A fingerprint singular point detection method based on RCNN is characterized by comprising the following steps:
step 1) the computer reads the original fingerprint image and constructs a data set: acquiring 256 x 320 original fingerprint grayscale images containing noise, first performing manual image enhancement and labeling the ground truth, then normalizing the images, and dividing the data into a training set and a test set at a ratio of 8:2;
step 2) constructing an encoder-decoder convolutional neural network for image enhancement, consisting of two modules, an encoding network and a decoding network; training the image enhancement network on the original data set, and saving the 256 x 320 fingerprint images predicted by the network as the input of step 3);
step 3) dividing the enhanced fingerprint image into a number of 41 x 41 regions along a grid, manually labeling the category of each region, representing the categories with a matrix as the ground truth, setting a probability threshold for screening the classification results, training a Res-net classifier on the enhanced image data set, and, for the output of each region, retaining the regions above the probability threshold for step 4);
step 4) taking the region images containing singular points from step 3) as input and the normalized singular point coordinates as output, and training an FCN, which essentially performs regression on the proposed regions of interest;
step 5) extracting the FCN predictions from step 4), comparing them with the true values, and computing the prediction accuracy of the method; based on the Euclidean distance between a predicted point and the true point, a prediction whose distance is below the threshold is regarded as a successful detection.
2. The method of claim 1, wherein: in step 1), manual image enhancement means filtering, denoising, and similar operations using image processing techniques; labeling the ground truth means manually marking the positions of the singular points, reading their coordinates, and saving them as a csv file; and image normalization means dividing the gray value of every pixel by 255 so that the values lie in the range [0, 1].
3. The method of claim 1, wherein: the image enhancement network in step 2) consists of an encoder network and a decoder network; the encoder network is composed of two identical modules, each containing two convolutional layers (3 x 3 kernels, 16 and 64 channels in turn, stride 1) and one max pooling layer (2 x 2 window); the decoder network is composed of two modules, each containing one upsampling layer (2 x 2 window) and two convolutional layers (3 x 3 kernels, 64 and 16 channels in turn, stride 1), followed finally by a convolutional layer with a 1 x 1 kernel; the mean square error is used as the loss function during training, and stochastic gradient descent is used for parameter optimization.
4. The method of claim 1, wherein: the matrix labeling each region category in step 3) is C = (c1, c2, c3), ci ∈ {0, 1}, i = 1, 2, 3, where C is the class matrix, c1 indicates whether the region contains a singular point, c2 indicates whether the region contains a core point, and c3 indicates whether the region contains a triangle point; and the probability threshold is determined by the particular data set and is generally slightly below the maximum of the predicted probabilities.
5. The method of claim 1, wherein: the specific structure of the Res-net in step 3) is: a convolutional layer with a 5 x 5 kernel and 16 channels, connected to a downsampling layer with a 2 x 2 window and 16 channels; then a convolutional layer with a 5 x 5 kernel and 32 channels, with a residual connection extending to the next convolutional layer; then a downsampling layer with a 2 x 2 window and 64 channels; and finally a convolutional layer with a 5 x 5 kernel and 64 channels and a fully connected layer; the training parameter settings of this network are the same as those of the network in step 2).
6. The method of claim 1, wherein: the training set in step 4) consists of the 41 x 41 pixel region grayscale images above the probability threshold from step 3); the singular point coordinates within each region are computed from the coordinates labeled on the original image and then normalized.
7. The method of claim 1, wherein: the FCN in step 4) consists of four similar modules, each composed of two convolutional layers (3 x 3 kernels) and one max pooling layer (2 x 2 window), with 16, 64, 128, and 256 channels in the four modules in turn; there are two fully connected layers with 256 and 2 nodes respectively; and the network performs regression using stochastic gradient descent and learns by backpropagating the mean square error.
8. The method of claim 1, wherein: the Euclidean distance used in step 5) is d = sqrt((px - gx)^2 + (py - gy)^2), where px and py are the horizontal and vertical coordinates of the predicted point, gx and gy are the horizontal and vertical coordinates of the true singular point, and the distance is compared with the threshold value.
9. The method of claim 1, wherein: the threshold in step 5) is determined by the picture size, generally about one tenth of the image dimension; for the picture size used here, the threshold is set to 20 pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911255304.1A CN110991374B (en) | 2019-12-10 | 2019-12-10 | Fingerprint singular point detection method based on RCNN |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911255304.1A CN110991374B (en) | 2019-12-10 | 2019-12-10 | Fingerprint singular point detection method based on RCNN |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110991374A true CN110991374A (en) | 2020-04-10 |
CN110991374B CN110991374B (en) | 2023-04-04 |
Family
ID=70091666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911255304.1A Active CN110991374B (en) | 2019-12-10 | 2019-12-10 | Fingerprint singular point detection method based on RCNN |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110991374B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112818797A * | 2021-01-26 | 2021-05-18 | Xiamen University | Consistency detection method and storage device for answer sheet document images of online examination
CN113705519A (en) * | 2021-09-03 | 2021-11-26 | 杭州乐盯科技有限公司 | Fingerprint identification method based on neural network |
CN115187570A (en) * | 2022-07-27 | 2022-10-14 | 北京拙河科技有限公司 | Singular traversal retrieval method and device based on DNN deep neural network |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- US20130125189A1 * | 2010-04-09 | 2013-05-16 | Alcatel Lucent | Multimedia content broadcast procedure
- CN104933722A * | 2015-06-29 | 2015-09-23 | University of Electronic Science and Technology of China | Image edge detection method based on a spiking-convolution network model
- US20160117587A1 * | 2014-10-27 | 2016-04-28 | Zhicheng Yan | Hierarchical deep convolutional neural network for image classification
- US20170169313A1 * | 2015-12-14 | 2017-06-15 | Samsung Electronics Co., Ltd. | Image processing apparatus and method based on deep learning and neural network learning
- WO2017133009A1 * | 2016-02-04 | 2017-08-10 | 广州新节奏智能科技有限公司 | Method for positioning human joints using depth images with a convolutional neural network
- US20170300785A1 * | 2016-04-14 | 2017-10-19 | LinkedIn Corporation | Deep convolutional neural network prediction of image professionalism
- US20170300811A1 * | 2016-04-14 | 2017-10-19 | LinkedIn Corporation | Dynamic loss function based on statistics in loss layer of deep convolutional neural network
- WO2017215284A1 * | 2016-06-14 | 2017-12-21 | Shandong University | Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
- WO2018089210A1 * | 2016-11-09 | 2018-05-17 | Konica Minolta Laboratory U.S.A., Inc. | System and method of using multi-frame image features for object detection
- CN108509839A * | 2018-02-02 | 2018-09-07 | Donghua University | Efficient gesture detection and recognition method based on region convolutional neural networks
- CN108645498A * | 2018-04-28 | 2018-10-12 | Nanjing University of Aeronautics and Astronautics | Impact location method based on phase-sensitive light reflection and convolutional neural network deep learning
- US20180330198A1 * | 2017-05-14 | 2018-11-15 | International Business Machines Corporation | Systems and methods for identifying a target object in an image
- CN108830908A * | 2018-06-15 | 2018-11-16 | Tianjin University | Magic cube color identification method based on an artificial neural network
- WO2018214195A1 * | 2017-05-25 | 2018-11-29 | China University of Mining and Technology | Remote sensing imaging bridge detection method based on convolutional neural network
- CN109214441A * | 2018-08-23 | 2019-01-15 | Guilin University of Electronic Technology | Fine-grained vehicle model recognition system and method
- CN109543643A * | 2018-11-30 | 2019-03-29 | University of Electronic Science and Technology of China | Carrier signal detection method based on a one-dimensional fully convolutional neural network
- CN109767423A * | 2018-12-11 | 2019-05-17 | Southwest Jiaotong University | Crack detection method for asphalt pavement images
- CN109815156A * | 2019-02-28 | 2019-05-28 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Display test method, apparatus, device and storage medium for visual elements in a page
- CN109948566A * | 2019-03-26 | 2019-06-28 | Jiangnan University | Two-stream face anti-spoofing detection method based on weight fusion and feature selection
- US20190266404A1 * | 2018-01-30 | 2019-08-29 | Magical Technologies, Llc | Systems, Methods and Apparatuses to Generate a Fingerprint of a Physical Location for Placement of Virtual Objects
- CN110232380A * | 2019-06-13 | 2019-09-13 | Tianjin Fire Research Institute of the Ministry of Emergency Management | Fire night scene restoration method based on the Mask R-CNN neural network
- US20190294758A1 * | 2018-03-26 | 2019-09-26 | Uchicago Argonne, Llc | Identification and localization of rotational spectra using recurrent neural networks
- CN110472623A * | 2019-06-29 | 2019-11-19 | Huawei Technologies Co., Ltd. | Image detection method, device and system
- 2019-12-10: application CN201911255304.1A filed; granted as patent CN110991374B (status: Active)
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130125189A1 (en) * | 2010-04-09 | 2013-05-16 | Alcatel Lucent | Multimedia content broadcast procedure |
US20160117587A1 (en) * | 2014-10-27 | 2016-04-28 | Zhicheng Yan | Hierarchical deep convolutional neural network for image classification |
CN104933722A (en) * | 2015-06-29 | 2015-09-23 | 电子科技大学 | Image edge detection method based on Spiking-convolution network model |
US20170169313A1 (en) * | 2015-12-14 | 2017-06-15 | Samsung Electronics Co., Ltd. | Image processing apparatus and method based on deep learning and neural network learning |
WO2017133009A1 (en) * | 2016-02-04 | 2017-08-10 | Guangzhou Newtempo Intelligent Technology Co., Ltd. | Method for locating human joints using depth images and a convolutional neural network |
US20170300811A1 (en) * | 2016-04-14 | 2017-10-19 | Linkedin Corporation | Dynamic loss function based on statistics in loss layer of deep convolutional neural network |
US20170300785A1 (en) * | 2016-04-14 | 2017-10-19 | LinkedIn Corporation | Deep convolutional neural network prediction of image professionalism |
WO2017215284A1 (en) * | 2016-06-14 | 2017-12-21 | Shandong University | Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network |
WO2018089210A1 (en) * | 2016-11-09 | 2018-05-17 | Konica Minolta Laboratory U.S.A., Inc. | System and method of using multi-frame image features for object detection |
US20180330198A1 (en) * | 2017-05-14 | 2018-11-15 | International Business Machines Corporation | Systems and methods for identifying a target object in an image |
WO2018214195A1 (en) * | 2017-05-25 | 2018-11-29 | China University of Mining and Technology | Remote sensing imaging bridge detection method based on convolutional neural network |
US20190266404A1 (en) * | 2018-01-30 | 2019-08-29 | Magical Technologies, Llc | Systems, Methods and Apparatuses to Generate a Fingerprint of a Physical Location for Placement of Virtual Objects |
CN108509839A (en) * | 2018-02-02 | 2018-09-07 | Donghua University | Efficient gesture detection and recognition method based on region convolutional neural networks |
US20190294758A1 (en) * | 2018-03-26 | 2019-09-26 | Uchicago Argonne, Llc | Identification and localization of rotational spectra using recurrent neural networks |
CN108645498A (en) * | 2018-04-28 | 2018-10-12 | Nanjing University of Aeronautics and Astronautics | Impact location method based on phase-sensitive optical reflection and convolutional neural network deep learning |
CN108830908A (en) * | 2018-06-15 | 2018-11-16 | Tianjin University | Rubik's cube color recognition method based on artificial neural networks |
CN109214441A (en) * | 2018-08-23 | 2019-01-15 | Guilin University of Electronic Technology | Fine-grained vehicle model recognition system and method |
CN109543643A (en) * | 2018-11-30 | 2019-03-29 | University of Electronic Science and Technology of China | Carrier signal detection method based on one-dimensional fully convolutional neural networks |
CN109767423A (en) * | 2018-12-11 | 2019-05-17 | Southwest Jiaotong University | Crack detection method for asphalt pavement images |
CN109815156A (en) * | 2019-02-28 | 2019-05-28 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Display testing method, apparatus, device and storage medium for visual elements in a page |
CN109948566A (en) * | 2019-03-26 | 2019-06-28 | Jiangnan University | Two-stream face anti-spoofing detection method based on weight fusion and feature selection |
CN110232380A (en) * | 2019-06-13 | 2019-09-13 | Tianjin Fire Research Institute of the Ministry of Emergency Management | Fire night scene restoration method based on Mask R-CNN neural network |
CN110472623A (en) * | 2019-06-29 | 2019-11-19 | Huawei Technologies Co., Ltd. | Image detection method, device and system |
Non-Patent Citations (3)
Title |
---|
Leanne Attard: "Automatic Crack Detection using Mask R-CNN", 2019 11th International Symposium on Image and Signal Processing and Analysis * |
T. Hoang Ngan Le: "Multiple Scale Faster-RCNN Approach to Driver's Cell-phone Usage and Hands on Steering Wheel Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops * |
WANG Xiaodong: "Complex Image Classification Based on Sparse Feature Learning", China Doctoral Dissertations Full-text Database * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112818797A (en) * | 2021-01-26 | 2021-05-18 | Xiamen University | Consistency detection method and storage device for online examination answer sheet document images |
CN112818797B (en) * | 2021-01-26 | 2024-03-01 | Xiamen University | Consistency detection method and storage device for online examination answer sheet document images |
CN113705519A (en) * | 2021-09-03 | 2021-11-26 | Hangzhou Leding Technology Co., Ltd. | Fingerprint identification method based on neural networks |
CN113705519B (en) * | 2021-09-03 | 2024-05-24 | Hangzhou Leding Technology Co., Ltd. | Fingerprint identification method based on neural networks |
CN115187570A (en) * | 2022-07-27 | 2022-10-14 | Beijing Zhuohe Technology Co., Ltd. | Singular traversal retrieval method and device based on DNN deep neural network |
Also Published As
Publication number | Publication date |
---|---|
CN110991374B (en) | 2023-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110414368B (en) | Unsupervised pedestrian re-identification method based on knowledge distillation | |
CN100565559C (en) | Image text location method and device based on connected component and support vector machine | |
CN110298343A (en) | Handwritten blackboard-writing recognition method | |
CN113486886B (en) | License plate recognition method and device in natural scene | |
CN112580507B (en) | Deep learning text character detection method based on image moment correction | |
CN111027539B (en) | License plate character segmentation method based on spatial position information | |
CN110991374B (en) | Fingerprint singular point detection method based on RCNN | |
CN112270317B (en) | Reading identification method of traditional digital water meter based on deep learning and frame difference method | |
CN109840483B (en) | Landslide crack detection and identification method and device | |
CN112307919B (en) | Improved YOLOv 3-based digital information area identification method in document image | |
CN112232371A (en) | American license plate recognition method based on YOLOv3 and text recognition | |
CN109635726B (en) | Landslide identification method based on combination of symmetric deep network and multi-scale pooling | |
CN113313678A (en) | Automatic sperm morphology analysis method based on multi-scale feature fusion | |
CN117217368A (en) | Training method, device, equipment, medium and program product of prediction model | |
CN116152824A (en) | Invoice information extraction method and system | |
CN115546553A (en) | Zero sample classification method based on dynamic feature extraction and attribute correction | |
CN114898290A (en) | Real-time detection method and system for marine ship | |
CN113392837B (en) | License plate recognition method and device based on deep learning | |
CN113569835A (en) | Water meter numerical value reading method based on target detection and segmentation identification | |
CN109284752A (en) | Rapid vehicle detection method | |
CN114998689B (en) | Track data set generation method, track identification method and system | |
CN111242114A (en) | Character recognition method and device | |
CN113343977B (en) | Multipath automatic identification method for container terminal truck collection license plate | |
CN113192018B (en) | Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network | |
CN116309601B (en) | Leather defect real-time detection method based on Lite-EDNet |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||