CN115311746A - Off-line signature authenticity detection method based on multi-feature fusion - Google Patents
Off-line signature authenticity detection method based on multi-feature fusion
- Publication number: CN115311746A
- Application number: CN202210867739.7A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/33 — Writer recognition; reading and verifying signatures based only on the signature image, e.g. static signature recognition
- G06V10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level, of extracted features
- G06V30/18 — Character recognition; extraction of features or characteristics of the image
- G06V30/18057 — Biologically-inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters, integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V30/18133 — Extraction of regional/local image features not essentially salient, e.g. local binary pattern
- G06V30/19127 — Extracting features by transforming the feature space, e.g. multidimensional scaling; mappings, e.g. subspace methods
- G06V30/1918 — Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion
Abstract
The invention discloses an off-line signature authenticity detection method based on multi-feature fusion, comprising the following steps: randomly dividing the signature data into training and testing sets of equal size; preprocessing the image samples of the signature data set; extracting local binary pattern (LBP) features and selecting the optimal distance metric; extracting Gabor filter features and selecting the optimal combination of Gabor convolution-kernel frequency and angle for the feature vector; extracting shape-index features and selecting the number of histogram levels that performs best; fusing the features obtained from the optimal settings; and computing the final accuracy on the test set. The invention combines the advantages of the local binary pattern, Gabor filter and shape-index features: it is robust to the lighting in the picture samples and consistent with models of the human visual receptive field. Compared with other methods, it improves anti-counterfeiting detection of party signatures on legally binding documents and contracts, and handwriting verification in daily life.
Description
Technical Field
The invention belongs to the technical fields of machine learning, image processing and signature anti-counterfeiting, and particularly relates to an off-line signature authenticity detection method based on multi-feature fusion.
Background
The signature is one of the biometric identification methods with the longest history, and its uniqueness and convenience are superior to those of other identification methods. Its main advantage is that signature authentication needs no complicated electronic equipment: under normal conditions, handwriting in the style of the person under test can be collected with nothing but pen and paper. Today, however, reliable signature authentication still depends heavily on the personal experience of experts, which is inevitably somewhat subjective; it is also limited by the number of available experts, and is therefore hard to apply widely across space and time.
Signature authenticity detection is essentially a classification task, and traditional classifiers all have shortcomings to some degree. A support vector machine (SVM), as a classical binary classifier, is difficult to apply to large-scale training samples in practice; logistic regression handles imbalanced data poorly; and decision-tree models tend to grow overly complex and generalize poorly to new data.
In recent years deep learning has been replacing traditional machine learning. Although such methods achieve relatively satisfactory detection accuracy, their demanding hardware requirements and high cost have kept signature anti-counterfeiting applications from spreading in the market.
Disclosure of Invention
To overcome these defects, the invention provides an off-line signature authenticity detection method based on multi-feature fusion. The method exploits the shape index, which is inherently normalized and reflects local shape, and fuses the signature features extracted via the shape index with the features extracted by the traditional local binary pattern and the Gabor filter.
In order to achieve the purpose, the following technical scheme is provided:
an off-line signature authenticity detection method based on multi-feature fusion comprises the following steps:
1) Randomly dividing the data into training and testing sets of equal size;
2) Preprocessing the image samples of the signature data set;
3) Extracting local binary pattern features and selecting the optimal distance metric;
4) Extracting Gabor filter features and selecting the optimal combination of Gabor convolution-kernel frequency and angle for the feature vector;
5) Extracting shape-index features and selecting the number of histogram levels that achieves the best effect;
6) Fusing the features obtained from the optimal settings of steps 3) to 5), and computing the final accuracy on the test set.
Further, step 1) is specifically as follows: the data set contains M signers, each providing 20 genuine signatures (20M in total); for each signer, two other people each imitate 8 forgeries, i.e. 16 forgeries per signer and 320M forgeries paired in total. Pairing every genuine signature with every forgery of the same signer gives S1 = 20·16·M = 320M genuine-forgery pairs; pairing the genuine signatures of a signer with each other gives S2 = C(20,2)·M = 190M genuine-genuine pairs. The total number of data samples is S = S1 + S2, which is divided evenly, so that the number of training samples T1 equals the number of test samples T2, i.e. T1 = T2 = S/2.
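The pair counts above are simple combinatorics; a hypothetical Python check (the function name `count_pairs` is illustrative, and M = 31 matches the embodiment described later):

```python
# Sketch of the step-1) sample-pair arithmetic: M writers, 20 genuine
# signatures each, and two imitators contributing 8 forgeries each.
from math import comb

def count_pairs(M: int):
    genuine_per_writer = 20
    forgeries_per_writer = 2 * 8          # two imitators, 8 forgeries each
    # genuine-forgery pairs: every genuine paired with every forgery of the writer
    s1 = M * genuine_per_writer * forgeries_per_writer   # 320*M
    # genuine-genuine pairs: unordered pairs among the 20 genuine signatures
    s2 = M * comb(genuine_per_writer, 2)                 # 190*M
    return s1, s2, s1 + s2

s1, s2, total = count_pairs(31)   # M = 31 as in the embodiment
```

For M = 31 this gives 9920 genuine-forgery pairs, consistent with the 31·20·16 figure in the detailed description.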
Further, the step 2) specifically comprises the following steps:
2.1) Sample pictures in public data sets often contain table borders; these border lines are redundant noise that can strongly affect the results, so the signature region must be segmented out separately. The Canny operator is used to obtain the image edges, Hough line detection and grid-point coordinate positioning are performed, the image is segmented according to the coordinates, and the grid lines are removed;
2.2) Since some characteristics of signature handwriting are sensitive to rotation, tilt correction is applied to the sample by finding the longest principal axis of inertia;
2.3) The strokes in some signature regions are light in colour and cannot be clearly separated by an ordinary global binarization method, so a block-based binarization method is used to distinguish and preserve the light-coloured strokes;
2.4) Stroke thickness varies with the model of pen used for the signature. Although thickness reflects part of a writer's style, uniform stroke thickness is necessary when the morphological characteristics of the handwriting are of interest, so a skeleton-extraction operation is applied to the strokes.
Further, step 3) is specifically as follows: the neighbourhood of the local binary pattern is set to a circle around the centre pixel for sampling, and a chi-square distance metric is applied to the circular local-binary-pattern features to obtain the result for the signature under test.
Further, step 4) operates as follows: a Gabor convolution kernel modulated by a Gaussian function and a cosine function is used; stroke texture features at different orientations and scales are reflected by adjusting the parameters of the Gabor convolution kernel, and the Gabor kernel template is L1 weight-normalized.
Further, the L1 weight normalization of the Gabor kernel template operates as follows: the binarized sample image is filtered with Gabor kernels at the angles 0, π/4, π/2 and 3π/4 and frequency 8, with the filter-kernel template L1-normalized; filtering yields a response image with grey values in (−1, 1); 64 uniform levels are taken to compute its grey-level histogram, which is normalized into the grey-level distribution, i.e. a probability density function; finally the chi-square distances are compared.
Further, the step 5) specifically comprises the following steps:
5.1) Compute the shape index, remove the undefined parts, and consider only the curvature of the stroke edges;
5.2) Similarly take a k-level histogram of the image, compute the chi-square distance, and judge the authenticity of the signature under test by comparing within-class and between-class distances.
Further, the step 6) specifically comprises the following steps:
6.1) Fuse the local binary pattern features, the Gabor filter features and the shape-index features, and reduce the dimensionality with PCA;
6.2) Using an ensemble-learning method, weight and combine the regression predictions of the three individual features by logistic regression.
The beneficial effects of the invention are as follows: the method combines the advantages of the local binary pattern, Gabor filter and shape-index features; it is robust to the lighting in the picture samples and consistent with models of the human visual receptive field; compared with other methods, it improves anti-counterfeiting detection of party signatures on legally binding documents and contracts and handwriting verification in daily life, and strengthens the guarantees of judicial authentication procedures.
Drawings
FIG. 1 is a schematic overview of the process of the present invention;
FIG. 2 is a schematic diagram of an image sample preprocessing flow according to the present invention;
FIG. 3 is a schematic diagram of a process for extracting signature image features by a Gabor filter;
FIG. 4 is a histogram of chi-squared distance comparison of pairs of characteristic authenticity samples extracted using a Gabor filter in accordance with the present invention;
FIG. 5 is a schematic representation of a three-dimensional shape corresponding to the use of a shape factor in the present invention;
FIG. 6 is a schematic diagram of a process for extracting signature image features using shape index according to the present invention;
FIG. 7 is a histogram comparing the chi-squared distances of genuine and forged sample pairs for the shape-index feature in the present invention.
Detailed Description
The invention is further described below with reference to the examples and the figures of the specification; the scope of the invention is, however, not limited thereto.
Referring to fig. 1, an off-line signature authenticity detection method based on multi-feature fusion includes the following steps:
1) Collecting a data set: signature handwriting of 31 persons is collected. Each person provides 20 genuine signatures, and two other persons each imitate 8 forgeries, giving 16 forgeries per person. Pairwise combination yields 31·C(20,2) = 5890 genuine sample pairs and 31·20·16 = 9920 forged sample pairs. The signatures were collected in a table format and scanned at 400 dpi, with the table cells, following the SigComp2011 settings, 59 mm wide and 23 mm high.
2) Referring to fig. 2, the signature table image collected through step 1) is preprocessed and separated into individual signature image samples, as follows:
2.1) First the Canny operator is used to compute the edges of the scanned signature-table image, comprising: Gaussian filtering, gradient computation, non-maximum suppression, double-threshold processing and hysteresis boundary tracking. In non-maximum suppression, each pixel is traversed along the positive and negative gradient directions and the maximum gradient along the way is computed; if the pixel's own gradient is not the maximum, it is set to 0. Double thresholding uses two empirical thresholds th1 and th2 with th1 ≤ th2, which divide the image into three parts: strong edge points (I(x, y) ≥ th2), weak edge points (th1 ≤ I(x, y) < th2) and background points (I(x, y) < th1). In hysteresis boundary tracking, a weak point is kept as an edge if, within its 8-connected neighbourhood (the eight mutually connected pixels surrounding a pixel), it is directly or indirectly connected to a strong edge point; this can be solved with a flood-fill algorithm.
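The last two Canny stages described above can be sketched compactly; a minimal NumPy illustration, assuming a precomputed gradient-magnitude array (the function name and thresholds are illustrative, not from the patent):

```python
import numpy as np
from collections import deque

def hysteresis_threshold(grad: np.ndarray, th1: float, th2: float) -> np.ndarray:
    """Double threshold + hysteresis tracking: strong points (>= th2) are edges;
    weak points in [th1, th2) survive only if 8-connected to a strong point,
    found here by a flood fill from all strong points."""
    strong = grad >= th2
    weak = (grad >= th1) & ~strong
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    h, w = grad.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not edges[ny, nx]:
                    edges[ny, nx] = True   # weak point reachable from a strong point
                    q.append((ny, nx))
    return edges

# toy example: the 9 is strong, the two 5s are weak but connected to it
grad = np.array([[0., 5., 9.], [0., 5., 0.], [0., 0., 0.]])
edges = hysteresis_threshold(grad, th1=4.0, th2=8.0)
```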
2.2) Hough line detection is then performed: a line is deemed present when the number of foreground points lying on it exceeds a threshold. Consider the equation of a line through the point (x, y): ρ = x·cosθ + y·sinθ. Each pair (ρ, θ) determines one straight line, so counting the occurrences of (ρ, θ) decides whether the corresponding line exists; for each foreground point, θ is sampled at equal intervals over the search range according to the required precision, and the corresponding ρ is computed.
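The voting scheme just described can be sketched with a dictionary accumulator; a simplified illustration (resolution parameters are choices of this sketch, not from the patent):

```python
import numpy as np

def hough_lines(points, rho_res=1.0, n_theta=180):
    """Vote-counting Hough transform over foreground points:
    each point votes for (rho, theta) bins with rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = {}
    for x, y in points:
        for t in thetas:
            r = round((x * np.cos(t) + y * np.sin(t)) / rho_res) * rho_res
            key = (r, round(float(t), 6))
            votes[key] = votes.get(key, 0) + 1
    return votes

# a horizontal line y = 2: its bin should collect one vote per point
pts = [(x, 2) for x in range(10)]
votes = hough_lines(pts)
best = max(votes, key=votes.get)
```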
2.3) Because scanning introduces a deformation close to an affine transformation, the intersections obtained by Hough line detection do not correspond one-to-one to the coordinates of the known ideal grid points. The optimal inverse affine transform is therefore found, which amounts to the least-squares solution of the over-determined system
(X|Y|1)W = Ideal
where Ideal denotes the coordinates of the ideal grid points and X, Y are the coordinates of the detected intersections.
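The over-determined system above has a closed least-squares solution; a NumPy sketch with a synthetic grid (the affine parameters below are invented for the check):

```python
import numpy as np

def fit_inverse_affine(detected: np.ndarray, ideal: np.ndarray) -> np.ndarray:
    """Least-squares W of (X|Y|1) W = Ideal, mapping detected grid
    intersections onto the known ideal grid coordinates."""
    A = np.hstack([detected, np.ones((len(detected), 1))])  # (X|Y|1)
    W, *_ = np.linalg.lstsq(A, ideal, rcond=None)
    return W  # 3x2 matrix

# synthetic check: distort an ideal grid with a known affine map, then recover it
ideal = np.array([[x, y] for x in (0, 59, 118) for y in (0, 23, 46)], dtype=float)
M = np.array([[1.01, 0.02], [-0.015, 0.99]])          # small shear/scale
detected = ideal @ M + np.array([3.0, -2.0])          # plus translation
W = fit_inverse_affine(detected, ideal)
recovered = np.hstack([detected, np.ones((len(detected), 1))]) @ W
```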
2.4) So that binarization can distinguish lighter strokes, a block-based binarization method is preferably used: whether an anchor pixel is a foreground point is decided from a weighting of the grey values within its block and a hyper-parameter C.
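One common realisation of such block-based binarization is adaptive mean thresholding; a minimal NumPy sketch (the block size and C below are illustrative tuning choices, and the local mean is computed with an integral image):

```python
import numpy as np

def adaptive_binarize(img: np.ndarray, block: int = 15, C: float = 5.0) -> np.ndarray:
    """A pixel is foreground (ink) when it is darker than the local mean of
    its block minus the offset C; this keeps light strokes that a single
    global threshold would miss."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # summed-area table (integral image) for fast window sums
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            y1, x1 = y + block, x + block
            s = ii[y1, x1] - ii[y, x1] - ii[y1, x] + ii[y, x]
            out[y, x] = img[y, x] < s / (block * block) - C
    return out

# light background (200) with one darker stroke row (100)
img = np.full((20, 20), 200.0)
img[10] = 100.0
mask = adaptive_binarize(img, block=15, C=5.0)
```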
2.5) The longest principal axis of inertia is obtained as the eigenvector corresponding to the largest eigenvalue of the matrix of second-order central moments
[ mu20  mu11 ]
[ mu11  mu02 ]
with x_c = mean(X), y_c = mean(Y) and mu_ij = mean((X − x_c)^i · (Y − y_c)^j), where i, j are the orders of the central moment, x_c is the mean x-coordinate of the foreground points, X is the vector of foreground x-coordinates and Y the vector of foreground y-coordinates.
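A NumPy sketch of this principal-axis computation (function name illustrative); for a perfectly diagonal stroke the recovered tilt is 45 degrees:

```python
import numpy as np

def principal_axis(mask: np.ndarray) -> float:
    """Longest principal axis of inertia: eigenvector of the second-order
    central-moment matrix [[mu20, mu11], [mu11, mu02]] belonging to the
    largest eigenvalue; returns the tilt angle in radians."""
    ys, xs = np.nonzero(mask)
    dx, dy = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (dx * dx).mean(), (dy * dy).mean(), (dx * dy).mean()
    M = np.array([[mu20, mu11], [mu11, mu02]])
    vals, vecs = np.linalg.eigh(M)
    v = vecs[:, np.argmax(vals)]           # eigenvector of the largest eigenvalue
    return float(np.arctan2(v[1], v[0]))

# a diagonal line should yield a 45-degree principal axis
mask = np.zeros((20, 20), dtype=bool)
idx = np.arange(20)
mask[idx, idx] = True
angle = principal_axis(mask)
```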
2.6) Skeleton extraction is based on the Zhang-Suen algorithm, considering the 3×3 neighbourhood of an anchor pixel.
First, delete the pixels satisfying all of the following conditions: (1) 2 ≤ N ≤ 6; (2) S = 1; (3) P2·P4·P6 = 0; (4) P4·P6·P8 = 0, where N is the number of foreground points on the circumference of the neighbourhood and S is the number of 0→1 transitions around it. Then delete the pixels satisfying: (1) 2 ≤ N ≤ 6; (2) S = 1; (3) P2·P4·P8 = 0; (4) P2·P6·P8 = 0. The two passes are repeated in a loop until neither step can remove any pixel.
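The two-pass loop above can be sketched directly; a compact NumPy illustration with P2..P9 walking clockwise from the pixel above the anchor (an assumption consistent with the usual Zhang-Suen convention):

```python
import numpy as np

def zhang_suen(img: np.ndarray) -> np.ndarray:
    """Zhang-Suen thinning: N = foreground neighbours, S = 0->1 transitions
    in the cyclic sequence P2,P3,...,P9,P2; deletions are applied after each
    full sub-pass, and the loop stops when neither pass removes a pixel."""
    img = img.astype(np.uint8).copy()

    def neighbours(y, x):
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]  # P2..P9

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if not img[y, x]:
                        continue
                    P = neighbours(y, x)
                    N = sum(P)
                    S = sum(P[i] == 0 and P[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:   # P2*P4*P6 == 0 and P4*P6*P8 == 0
                        cond = P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0
                    else:           # P2*P4*P8 == 0 and P2*P6*P8 == 0
                        cond = P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0
                    if 2 <= N <= 6 and S == 1 and cond:
                        to_del.append((y, x))
            for y, x in to_del:
                img[y, x] = 0
                changed = True
    return img.astype(bool)

# a 3-pixel-thick bar should thin to a thinner subset of itself
bar = np.zeros((7, 12))
bar[2:5, 2:10] = 1
sk = zhang_suen(bar)
```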
3) Features are extracted with the original LBP (local binary pattern) at radius 3, and the chi-square distance metric is tested on the extracted features. The original LBP divides the image into a number of small blocks; in each block, the grey values of the 8-neighbourhood of each pixel are thresholded against the centre pixel and encoded as an 8-bit binary number in the range [0, 255]; the frequency of each pattern is then counted to obtain a 256-dimensional feature vector, and the vectors of all blocks are concatenated into the feature vector of the whole image. The code of one pixel is
LBP = sum over k = 1..8 of s(g_k − g_c) · 2^(k−1)
where g_c is the grey value of the centre pixel, g_k is the grey value of the k-th neighbour read in a fixed circular order, and s(x) = 1 for x ≥ 0 and 0 otherwise.
Uniform circular sampling at radius 3 around the centre pixel is adopted in order to capture features at a larger scale, and the original LBP feature has a certain robustness to the illumination of the image. In the experiments of the invention, the chi-square distance was found to perform better than the other distances.
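The encoding and the chi-square comparison can be sketched as follows; a minimal NumPy illustration using the basic radius-1 square neighbourhood for brevity (the embodiment samples a circle of radius 3, and the neighbour order here is one illustrative choice):

```python
import numpy as np

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Original 8-neighbour LBP: threshold each neighbour against the centre,
    pack the bits into an 8-bit code, and return the normalised 256-bin
    histogram of codes as the feature vector."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def chi_square(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Chi-square distance between two normalised histograms."""
    return float(0.5 * np.sum((p - q) ** 2 / (p + q + eps)))

rng = np.random.default_rng(1)
h1 = lbp_histogram(rng.integers(0, 256, (16, 16)).astype(float))
h2 = lbp_histogram(rng.integers(0, 256, (16, 16)).astype(float))
```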
4) Referring to fig. 3, the sample image is binarized to 0 and 1, Gabor convolution-kernel templates are prepared, and the optimal Gabor parameters are tuned. The kernel used is the real part of the Gabor filter, which is modulated by a Gaussian function and a cosine function, covers both the spatial and the frequency domain, and reflects texture features at different scales and orientations as its parameters are adjusted. The response function is
g(x, y; θ, σ, λ, ψ, γ) = exp(−(x′² + γ²·y′²) / (2σ²)) · cos(2π·x′/λ + ψ),
with x′ = x·cosθ + y·sinθ and y′ = −x·sinθ + y·cosθ,
where θ is the orientation of the convolution kernel; σ reflects the window size; λ reflects the density of the texture; ψ is the phase parameter of the cosine in the Gabor kernel function; and γ is the aspect ratio, determining the ellipticity of the shape of the Gabor function.
In addition, the Gabor convolution-kernel template needs L1 weight normalization; the image filtered by the Gabor filter then has grey values in (−1, 1). Next, 64 uniform levels are taken to compute the grey-level histogram, which is normalized into the grey-level distribution, i.e. a probability density function, and finally the chi-square distances are compared. In the invention the kernel frequency is set to 8 and the angles to the four orientations 0, π/4, π/2 and 3π/4, and the feature vectors extracted by the different convolution kernels are concatenated as the combined feature. Fig. 4 shows the distance comparison of a genuine/forged sample pair after the above operations.
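A NumPy sketch of the kernel construction, L1 normalization and 64-level response histogram (the kernel size and σ, γ values are illustrative choices; circular FFT convolution stands in for the filtering step):

```python
import numpy as np

def gabor_kernel(ksize=21, theta=0.0, sigma=4.0, lam=8.0, psi=0.0, gamma=0.5):
    """Real part of a Gabor kernel (Gaussian-modulated cosine), L1-normalised
    so that responses to a 0/1 image stay within (-1, 1)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam + psi)
    return k / np.abs(k).sum()          # L1 weight normalisation

def response_histogram(img, kernels, bins=64):
    """Filter, histogram the responses over (-1, 1) in 64 uniform levels,
    normalise each histogram to a probability density, and concatenate."""
    feats = []
    for k in kernels:
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
        h, _ = np.histogram(np.clip(resp, -1, 1), bins=bins, range=(-1, 1))
        feats.append(h / h.sum())
    return np.concatenate(feats)

kernels = [gabor_kernel(theta=t) for t in (0, np.pi/4, np.pi/2, 3*np.pi/4)]
img = (np.random.default_rng(0).random((32, 32)) > 0.5).astype(float)
feat = response_histogram(img, kernels)
```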
5) Referring to fig. 6, the sample picture is first binarized, and its shape index is then computed. The shape index is based on the Hessian matrix: let λ1, λ2 be the two eigenvalues of the Hessian with λ1 > λ2; the shape index is then defined as
s = (2/π) · arctan((λ1 + λ2) / (λ1 − λ2))
and its values reflect the local shape as shown in fig. 5.
Referring to fig. 5, it can be seen that the shape index of flat regions is undefined, so these irrelevant parts are removed and only the curvature of the stroke edges is considered; finally, a 36-level histogram is computed and counted. Fig. 7 compares the chi-square distances of genuine and forged samples for the shape-index feature.
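The per-pixel Hessian eigenvalues and the index above can be sketched with finite differences; a NumPy illustration (the flat-region test via an epsilon is an assumption of this sketch):

```python
import numpy as np

def shape_index(img: np.ndarray, eps: float = 1e-9):
    """Shape index s = (2/pi)*arctan((lam1+lam2)/(lam1-lam2)) with Hessian
    eigenvalues lam1 > lam2; undefined on flat regions (lam1 == lam2),
    which are returned as a mask to be removed."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc        # lam1 >= lam2
    flat = np.abs(lam1 - lam2) < eps                 # index undefined here
    s = np.zeros_like(lam1)
    s[~flat] = (2 / np.pi) * np.arctan((lam1 + lam2)[~flat] / (lam1 - lam2)[~flat])
    return s, flat

s_flat, flat_mask = shape_index(np.zeros((10, 10)))          # flat everywhere
s_rnd, _ = shape_index(np.random.default_rng(2).random((12, 12)))
```

Since arctan lies in (−π/2, π/2), the index is confined to (−1, 1), which gives the self-normalization the disclosure relies on.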
The feature fusion in the invention is an ensemble-learning method with an iterative structure. Specifically, the three predicted probability values obtained by passing each individual feature through the sigmoid function of a logistic regression serve as the outputs of first-stage weak learners, which increases the weight of data misclassified in the previous training; the weak learners' outputs are then concatenated and passed through a further logistic regression to obtain the accuracy of the strong classifier. In this process the invention uses PCA (principal component analysis) dimensionality reduction, finding an optimal linear transform that projects the original data onto a set of orthonormal bases while retaining the maximum variance. This prevents overfitting of the training data and reduces the training difficulty.
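One plausible reading of this fusion scheme is stacking: per-feature logistic-regression base learners whose probabilities feed a logistic meta-learner, with PCA applied to each feature block first. A minimal NumPy sketch under that assumption (all function names, the gradient-descent training and the synthetic data are illustrative, not from the patent):

```python
import numpy as np

def pca(X, k):
    """Project X onto the top-k principal directions (max-variance orthonormal basis)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def logistic_fit(X, y, lr=0.1, epochs=500):
    """Plain logistic regression by gradient descent; returns a predict function
    giving sigmoid probabilities in (0, 1)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return lambda Z: 1 / (1 + np.exp(-np.hstack([Z, np.ones((len(Z), 1))]) @ w))

def stack(feature_blocks, y, k=2):
    """Stacking: PCA-reduced base learners per feature block, then a logistic
    meta-learner over their predicted probabilities."""
    base_preds = [logistic_fit(pca(F, k), y)(pca(F, k)) for F in feature_blocks]
    meta_X = np.column_stack(base_preds)
    return logistic_fit(meta_X, y)(meta_X)

# synthetic separable demo: two feature blocks, class means 0 vs 3
rng = np.random.default_rng(0)
y = np.repeat([0.0, 1.0], 50)
F1 = rng.normal(y[:, None] * 3.0, 1.0, (100, 5))
F2 = rng.normal(y[:, None] * 3.0, 1.0, (100, 4))
probs = stack([F1, F2], y)
```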
Claims (8)
1. An off-line signature authenticity detection method based on multi-feature fusion is characterized by comprising the following steps:
1) Dividing a training data set and a testing data set at equal proportion randomly;
2) Preprocessing a signature data set image sample;
3) Extracting local binary pattern features, and selecting an optimal distance measurement function;
4) Extracting the characteristics of a Gabor filter, and selecting the optimal combination design of the frequency and the angle of a Gabor convolution kernel and a characteristic vector;
5) Extracting the characteristic of the shape factor, and selecting the stage number which can achieve the best effect;
6) Fusing the characteristics obtained by the optimal solution from the step 3) to the step 5), and calculating the final precision on the test set.
2. The off-line signature authenticity detection method based on multi-feature fusion as claimed in claim 1, wherein step 1) is as follows: the data set contains M signers, each providing 20 genuine signatures (20M in total); for each signer, two other people each imitate 8 forgeries, i.e. 16 forgeries per signer and 320M forgeries paired in total; pairing every genuine signature with every forgery of the same signer gives S1 = 320M genuine-forgery pairs, and pairing the genuine signatures of a signer with each other gives S2 = C(20,2)·M = 190M genuine-genuine pairs; the total number of data samples is S = S1 + S2, which is divided evenly, so that the number of training samples T1 equals the number of test samples T2, i.e. T1 = T2 = S/2.
3. The off-line signature authenticity detection method based on multi-feature fusion as claimed in claim 1, wherein the step 2) comprises the following steps:
2.1 The signature region is separately segmented, the Canny operator is used for obtaining the image edge, hough line detection and grid point coordinate positioning are carried out, image segmentation is carried out according to the coordinates, and grid lines are removed;
2.2 Using a method of solving the longest principal axis of inertia to perform tilt correction on the sample;
2.3 Adopting a block-based binarization method to distinguish the handwriting with light color and keeping the handwriting with light color;
2.4 Perform skeleton extraction operations on the handwriting.
4. The off-line signature authenticity detection method based on multi-feature fusion as claimed in claim 1, wherein step 3) is as follows: the neighbourhood of the local binary pattern is set to a circle around the centre pixel for sampling, and a chi-square distance metric is applied to the circular local-binary-pattern features to obtain the result for the signature under test.
5. The off-line signature authenticity detection method based on multi-feature fusion as claimed in claim 1, wherein step 4) operates as follows: a Gabor convolution kernel modulated by a Gaussian function and a cosine function is used; stroke texture features at different orientations and scales are reflected by adjusting the parameters of the Gabor convolution kernel, and the Gabor kernel template is L1 weight-normalized.
6. The off-line signature authenticity detection method based on multi-feature fusion as claimed in claim 5, wherein the L1 weight normalization of the Gabor kernel template operates as follows: the binarized sample image is filtered with Gabor kernels at the angles 0, π/4, π/2 and 3π/4 and frequency 8, with the filter-kernel template L1-normalized; filtering yields a response image with grey values in (−1, 1); 64 uniform levels are taken to compute its grey-level histogram, which is normalized into the grey-level distribution, i.e. a probability density function; finally the chi-square distances are compared.
7. The off-line signature authenticity detection method based on multi-feature fusion as claimed in claim 1, wherein the step 5) comprises the following steps:
5.1 Computing a shape index, removing undefined parts and only paying attention to the curvature of the handwriting edge;
5.2) Similarly take a k-level histogram of the image, compute the chi-square distance, and judge the authenticity of the signature under test by comparing within-class and between-class distances.
8. The off-line signature authenticity detection method based on multi-feature fusion as claimed in claim 1, wherein the step 6) comprises the following steps:
6.1 Fusing local binary pattern features, gabor filter features and shape factor features, and using PCA to reduce dimensions;
6.2) Using an ensemble-learning method, weight and combine the regression predictions of the three individual features by logistic regression.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210867739.7A CN115311746A (en) | 2022-07-22 | 2022-07-22 | Off-line signature authenticity detection method based on multi-feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115311746A true CN115311746A (en) | 2022-11-08 |
Family
ID=83856379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210867739.7A Pending CN115311746A (en) | 2022-07-22 | 2022-07-22 | Off-line signature authenticity detection method based on multi-feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115311746A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116758077A (en) * | 2023-08-18 | 2023-09-15 | 山东航宇游艇发展有限公司 | Online detection method and system for surface flatness of surfboard |
CN116758077B (en) * | 2023-08-18 | 2023-10-20 | 山东航宇游艇发展有限公司 | Online detection method and system for surface flatness of surfboard |
CN117475519A (en) * | 2023-12-26 | 2024-01-30 | 厦门理工学院 | Off-line handwriting identification method based on integration of twin network and multiple channels |
CN117475519B (en) * | 2023-12-26 | 2024-03-12 | 厦门理工学院 | Off-line handwriting identification method based on integration of twin network and multiple channels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||